Spring MVC Form Tutorial

This tutorial will show how to handle a form submission in Spring MVC. We will define a controller to handle the page load and the form submission. You can grab the code on GitHub.

Prerequisites: You should have a working Spring MVC application. If you do not already have a working Spring MVC application set up, follow this tutorial.

For this tutorial, we are going to make a simple form for subscribing to a newsletter. The form will have the following fields:

- name – input field
- age – input field
- email – input field
- gender – select drop-down
- receiveNewsletter – checkbox
- newsletterFrequency – select drop-down

Requirements:

- The newsletterFrequency drop-down should only be active if the receiveNewsletter checkbox is checked.
- We will not be performing any validations in this example (stay tuned for a future tutorial).
- When the user submits the form, the same page will reload.
- The reloaded page should display a message indicating that the submission was successful and showing the saved values.

When we're done, we will have a page that looks like this:

First, let's set up the object we will use to store the subscriber's information. Create the class Subscriber in package com.codetutr.form. This is a basic Java bean. Notice we are using enumerations to store the gender and newsletter frequency fields. For simplicity, I defined the enums in the same class. Also notice that we are defining the toString. This is just so we can easily get the values to print after submission.

Subscriber.java

```java
package com.codetutr.form;

public class Subscriber {

    private String name;
    private String email;
    private Integer age;
    private Gender gender;
    private Frequency newsletterFrequency;
    private Boolean receiveNewsletter;

    public enum Frequency {
        HOURLY, DAILY, WEEKLY, MONTHLY, ANNUALLY
    }

    public enum Gender {
        MALE, FEMALE
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    public Integer getAge() { return age; }
    public void setAge(Integer age) { this.age = age; }

    public Gender getGender() { return gender; }
    public void setGender(Gender gender) { this.gender = gender; }

    public Frequency getNewsletterFrequency() { return newsletterFrequency; }
    public void setNewsletterFrequency(Frequency newsletterFrequency) { this.newsletterFrequency = newsletterFrequency; }

    public Boolean getReceiveNewsletter() { return receiveNewsletter; }
    public void setReceiveNewsletter(Boolean receiveNewsletter) { this.receiveNewsletter = receiveNewsletter; }

    @Override
    public String toString() {
        return "Subscriber [name=" + name + ", age=" + age + ", gender=" + gender
                + ", newsletterFrequency=" + newsletterFrequency
                + ", receiveNewsletter=" + receiveNewsletter + "]";
    }
}
```

Now, let's create the controller.
Create class FormController in package com.codetutr.controller:

FormController.java

```java
package com.codetutr.controller;

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

import com.codetutr.form.Subscriber;
import com.codetutr.form.Subscriber.Frequency;

@Controller
public class FormController {

    @ModelAttribute("frequencies")
    public Frequency[] frequencies() {
        return Frequency.values();
    }

    @RequestMapping(value = "form", method = RequestMethod.GET)
    public String loadFormPage(Model m) {
        m.addAttribute("subscriber", new Subscriber());
        return "formPage";
    }

    @RequestMapping(value = "form", method = RequestMethod.POST)
    public String submitForm(@ModelAttribute Subscriber subscriber, Model m) {
        m.addAttribute("message", "Successfully saved person: " + subscriber.toString());
        return "formPage";
    }
}
```

Let's look at a few things in the code above. First, notice that both request handlers (methods annotated with @RequestMapping) are mapped to the same URL – "form". The only difference in the mapping is that one handles an HTTP GET request, and the other a POST. The first handler (for the GET request) will be invoked when the user navigates to the "form" page, because they will access the page using a GET request. The POST handler is invoked when the form is submitted (since it will be submitted via HTTP POST to the "form" URL). You could, of course, submit your form to any URL using any HTTP method – just make sure to map your handler accordingly here.

Let's look at the GET handler. It takes a Model, which we populate with an empty Subscriber object. This object is what we will use to populate our form. We are not setting any values here, but if we wanted to, say, default the receiveNewsletter checkbox to true and set the default newsletter frequency to hourly, we could do:

```java
Subscriber subscriber = new Subscriber();
subscriber.setReceiveNewsletter(true);
subscriber.setNewsletterFrequency(Frequency.HOURLY);
m.addAttribute("subscriber", subscriber);
```

Also note that if we do not add an object called "subscriber" to the model, Spring would complain when we try to access the JSP, because we will be setting up the JSP to bind the form to the "subscriber" model attribute. You would see a JSP error: "Neither BindingResult nor plain target object for bean name 'subscriber' available as request attribute" and the JSP would not render.

The last thing to look at in the controller code is the @ModelAttribute method. When a method is annotated with @ModelAttribute, Spring runs it before each handler method and adds the return value to the model. We specified in the annotation to add the Frequency values to the model as "frequencies". This object will be used to populate the newsletter frequency drop-down box in the JSP form. Instead of using the @ModelAttribute method, we could have added the following line to each of the request handlers:

```java
m.addAttribute("frequencies", Frequency.values());
```

Finally, let's set up the JSP.
Create a file called formPage.jsp in WEB-INF/view (or wherever you have configured your JSPs to reside):

formPage.jsp

```jsp
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>
<!DOCTYPE HTML>
<html>
<head>
    <title>Sample Form</title>
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
    <style>
        body { background-color: #eee; font: helvetica; }
        #container { width: 500px; background-color: #fff; margin: 30px auto; padding: 30px; border-radius: 5px; box-shadow: 5px; }
        .green { font-weight: bold; color: green; }
        .message { margin-bottom: 10px; }
        label { width: 70px; display: inline-block; }
        form { line-height: 160%; }
        .hide { display: none; }
    </style>
</head>
<body>

<div id="container">

<h2>Subscribe to The Newsletter!</h2>

<c:if test="${not empty message}"><div class="message green">${message}</div></c:if>

<form:form modelAttribute="subscriber">
    <label for="nameInput">Name: </label>
    <form:input path="name" id="nameInput" />
    <br/>

    <label for="ageInput">Age: </label>
    <form:input path="age" id="ageInput" />
    <br/>

    <label for="emailInput">Email: </label>
    <form:input path="email" id="emailInput" />
    <br/>

    <label for="genderOptions">Gender: </label>
    <form:select path="gender" id="genderOptions">
        <form:option value="">Select Gender</form:option>
        <form:option value="MALE">Male</form:option>
        <form:option value="FEMALE">Female</form:option>
    </form:select>
    <br/>

    <label for="newsletterCheckbox">Newsletter? </label>
    <form:checkbox path="receiveNewsletter" id="newsletterCheckbox" />
    <br/>

    <label for="frequencySelect">Freq:</label>
    <form:select path="newsletterFrequency" id="frequencySelect">
        <form:option value="">Select Newsletter Frequency: </form:option>
        <c:forEach items="${frequencies}" var="frequency">
            <form:option value="${frequency}">${frequency}</form:option>
        </c:forEach>
    </form:select>
    <br/><br/>

    <input type="submit" value="Submit" />
</form:form>

</div>

<script type="text/javascript">
    $(document).ready(function() {
        toggleFrequencySelectBox(); // show/hide box on page load
        $('#newsletterCheckbox').change(function() {
            toggleFrequencySelectBox();
        })
    });

    function toggleFrequencySelectBox() {
        if (!$('#newsletterCheckbox').is(':checked')) {
            $('#frequencySelect').val('');
            $('#frequencySelect').prop('disabled', true);
        } else {
            $('#frequencySelect').prop('disabled', false);
        }
    }
</script>

</body>
</html>
```

Let's walk through the form tags we are using. Notice the line at the top of the page: <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>. This imports the Spring form tags we will be using. When we open the form with the <form:form> tag, note that we are specifying the model attribute. This tells Spring to look for an attribute in the Model and bind it to the form. The action and method attributes can also be specified. If unspecified (as in this example), they default to the current URL and "POST", respectively (just like regular HTML forms).

Notice that each of our input fields is using the Spring form taglib (form: prefix). Each of these fields also specifies a path attribute. This must correspond to a getter or setter of the model attribute (in our case, the Subscriber class) according to the standard Java bean convention (get/is, set prefixed to the field name with the first letter capitalized). When the page is loaded, the input fields are populated by Spring, which calls the getter of each field bound to an input field.
When the form is submitted, the setters are called to save the values of the form to the object. The <form:input> tags are pretty self-explanatory. Notice the two instances of <form:select> used. In the first select drop-down, for the gender field, notice that we manually list all of the options. In the newsletter frequency select drop-down, though, we loop through the frequencies model attribute (remember we added that to the model through the @ModelAttribute-annotated method in the controller) and add each item as an option in the drop-down. Spring will automatically bind the form values to the enums when the form is submitted, as long as the value of the selected option is a valid enum name.

When the form is submitted, the POST handler in the controller is invoked. The form is automatically bound to the subscriber argument that we passed in. The @ModelAttribute annotation isn't actually necessary here. I will write more about that in another post.

There you have it! I strongly recommend you download the source and run the code. Post any questions you have in the comments below.

Full Source: ZIP, GitHub

To run the code from this tutorial:

- You must have Gradle installed.
- Download the ZIP and extract it.
- Open a command prompt at the extracted location.
- Run gradle jettyRunWar.
- Navigate in a browser to http://localhost:8080/form.

Resources

- Spring Form TagLib Reference Documentation
- DZone – Spring Form Tag Tutorial

Reference: Spring MVC Form Tutorial from our JCG partner Steve Hanson at the CodeTutr blog.

HotSpot GC Thread CPU footprint on Linux

The following question will test your knowledge of garbage collection and high-CPU troubleshooting for Java applications running on Linux. This troubleshooting technique is especially crucial when investigating excessive GC and/or CPU utilization. It assumes that you do not have access to advanced monitoring tools such as Compuware dynaTrace or even JVisualVM. Future tutorials will cover such tools, but please ensure that you first master the basic troubleshooting principles.

Question: How can you monitor and calculate how much CPU % each of the Oracle HotSpot or JRockit JVM garbage collection (GC) threads is using at runtime on Linux?

Answer: On Linux, Java threads are implemented as native threads, which results in each thread being a separate lightweight process visible to the OS. This means that you are able to monitor the CPU % of any Java thread created by the HotSpot JVM using the top -H command (threads toggle view). That said, depending on the GC policy that you are using and your server specifications, the HotSpot and JRockit JVMs will create a certain number of GC threads that perform young and old space collections. Such threads can be easily identified by generating a JVM thread dump. As you can see below in our example, the Oracle JRockit JVM did create 4 GC threads, identified as "(GC Worker Thread X)".

```
===== FULL THREAD DUMP ===============
Fri Nov 16 19:58:36 2012
BEA JRockit(R) R27.5.0-110-94909-1.5.0_14-20080204-1558-linux-ia32

"Main Thread" id=1 idx=0x4 tid=14911 prio=5 alive, in native, waiting
-- Waiting for notification on: weblogic/t3/srvr/T3Srvr@0xfd0a4b0[fat lock]
    at jrockit/vm/Threads.waitForNotifySignal(JLjava/lang/Object;)Z(Native Method)
    at java/lang/Object.wait(J)V(Native Method)
    at java/lang/Object.wait(Object.java:474)
    at weblogic/t3/srvr/T3Srvr.waitForDeath(T3Srvr.java:730)
    ^-- Lock released while waiting: weblogic/t3/srvr/T3Srvr@0xfd0a4b0[fat lock]
    at weblogic/t3/srvr/T3Srvr.run(T3Srvr.java:380)
    at weblogic/Server.main(Server.java:67)
    at jrockit/vm/RNI.c2java(IIIII)V(Native Method)
-- end of trace

"(Signal Handler)" id=2 idx=0x8 tid=14920 prio=5 alive, in native, daemon
"(GC Main Thread)" id=3 idx=0xc tid=14921 prio=5 alive, in native, native_waiting, daemon
"(GC Worker Thread 1)" id=? idx=0x10 tid=14922 prio=5 alive, in native, daemon
"(GC Worker Thread 2)" id=? idx=0x14 tid=14923 prio=5 alive, in native, daemon
"(GC Worker Thread 3)" id=? idx=0x18 tid=14924 prio=5 alive, in native, daemon
"(GC Worker Thread 4)" id=? idx=0x1c tid=14925 prio=5 alive, in native, daemon
.........................
```

Now let's put all of these principles together via a simple example.

Step #1 – Monitor the GC thread CPU utilization

The first step of the investigation is to:

- Identify the native thread ID for each GC worker thread shown via the Linux top -H command.
- Identify the CPU % for each GC worker thread.

Step #2 – Generate and analyze JVM thread dumps

- At the same time as top -H, generate 2 or 3 JVM thread dump snapshots via kill -3 <Java PID>.
- Open the JVM thread dump and locate the JVM GC worker threads.
- Now correlate the top -H output data with the JVM thread dump data by looking at the native thread id (tid attribute).

As you can see in our example, such analysis did allow us to determine that all our GC worker threads were using around 20% CPU each. This was due to major collections happening at that time.
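If you want a safe sandbox in which to practice this correlation before facing a real incident, a throwaway program that generates steady allocation pressure will make the GC worker threads show up clearly in top -H. The sketch below is my own illustration, not part of the original article; run it with a small heap (for example -Xmx256m) to force frequent collections.

```java
import java.util.ArrayList;
import java.util.List;

// Generates constant allocation churn: most objects die young (minor GC),
// while a retained fraction is periodically dropped to force major GC too.
public class GcPressure {
    public static void main(String[] args) {
        List<byte[]> retained = new ArrayList<>();
        long counter = 0;
        while (true) {
            byte[] chunk = new byte[128 * 1024]; // short-lived garbage
            if (counter++ % 50 == 0) {
                retained.add(chunk);             // a small fraction survives
            }
            if (retained.size() > 500) {
                retained.clear();                // old-generation churn
            }
        }
    }
}
```

While it runs, watch top -H in one terminal and issue kill -3 <Java PID> in another, then match the busiest native thread IDs against the GC threads in the resulting dump.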
Please note that it is also very useful to enable verbose GC logging (the -verbose:gc JVM argument), as it will allow you to correlate such CPU spikes with minor and major collections and determine how much your JVM GC process is contributing to the overall server CPU utilization.

Reference: HotSpot GC Thread CPU footprint on Linux from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns blog.

Going Dawkins on god objects

Meet the enemy.

Steaming stinking jungle crushing all about you, insects infesting the air, strange grunts shuddering in the foliage, you pause deep in uncharted refactoring, far from respite, far from home, far from help. Sweat oozes from all pores. Your machete, notched and dulled from ten thousand hacks, hangs weary in your calloused grip while your mind mumbles to itself, exhausted. Behind you no track records your passing: the countless vines severed, the countless dependencies torn asunder, the countless decrepit classes heaved aside, none of these has changed the tangling madness. All your agonized restructuring seems wasted, meaningless, amid the yelping brutes scuttling unglimpsed beyond the trees.

Then you see the darkness. Lost in toil, you had not noticed it as you approached, its form an expansive yet subtle shading. Dread floods your soul. You stagger through the treeline, drawn to the horror, and the light begins to fail with each step. Weak slashing fails to keep the briars from your flesh but you push on. You have to see it. You must see it. A final wall of sticky vegetation parts and you stumble through. The vista opens before you. The monster reveals itself.

God object. Immeasurably vast, the foul beast sprawls from forest floor to canopy, smothering the sky. Tendrils snake everywhere, radiating from the central denseness to choke every class they touch. Worse still, fat tuberous dependencies thrash towards the god object's gigantic core from all sides, an insanity of twisting and coiling and piercing. Defeated, you collapse to your knees, machete spilling from your hand. Despair pounces. You bow your head. There, on the ground before you, something glints, something silver and out-of-place. With your last strength you reach out to pick up the metal tag. Faint writing scores its surface. It reads, "MavenProject.java".

Scales and sightings.

To the term, "God object," many object. The animal in question is not at all an object but a class, not an electrically-driven choreography of registers pinging data across high-speed buses but a file of text, albeit a commodious one. Nor would many quite agree with the honourable Wikipedia's claim that our quarry need have, "… most of a program's overall functionality … coded into a single 'all-knowing' object, which maintains most of the information about the entire program and provides most of the methods for manipulating this data." Given today's corporate behemoths, a single class housing the better part of a banking system, for example, would be a thing of splendid wonder. Instead, most would consider a god object to be any class that was just doing far too much, had far too many responsibilities or just had far too many lines of code.

This begs the question: what is far too much? Few would consider a class of two hundred lines of text far too anything. It might be, of course: outliers and rare exceptions exist, but a reasonable class of two hundred lines would hardly raise suspicions purely because of its size. A class of five hundred lines might meet some jeering, some stern interrogation; yet it, too, could sail through a review if justified by a knowledgeable designer. There are always reasons for these things. At a thousand lines long, however, a class has crossed an unspoken boundary. Good taste has been snubbed, good breeding insulted. Someone's whipped out a can of lager at the wine-tasting. A thousand lines just seems so breakupable.
Has this class really so focused a single yet gargantuan responsibility that it could not be split in two? Really? "Now you're just being silly," is certainly the politest version of comment heard when a class of fifteen hundred lines steps out of the taxi and onto the reviewers' red carpet. The paparazzi flash their cameras but disbelief rather than admiration motivates their whirring shutters. Two thousand lines perhaps – just perhaps – denotes ascension to godliness. It may not accommodate most of the program's functionality but dear, oh dear, a class wily enough to evade reviewer after blade-wielding reviewer and grow so large almost deserves a shrine or two erected in its honour. That such a class might justify its own non-decomposability would strain the belief of even the most gullible software architect.

God objects, furthermore, fall into two categories. The first is the autonomous god object, which, while screaming demented abuse at all good structure principles, at least has the grace to do so by itself, secluded in a corner somewhere. Few classes depend on this type of god object; one or two other classes must prod it into action, after which it will bluster about doing an enormous amount of stuff, but at least splitting it into smaller classes will not ripple back up through the system, precisely because so few other classes depend on it. Its relative isolation makes this type of god object the lesser of the two evils.

The communal god object, the second category, boasts a vast number of dependent classes. This type can have twenty, thirty or even forty other classes directly dependent on it, all of which cover their eyes and inhale through clenched teeth when a programmer attempts a decomposition. Listen for the crunch-crunch-crunch as you approach, for the floors about these god objects lie littered with the shards of shattered careers. Have you ever seen one? Have you ever seen a communal god object? If not, sit back and find something tough to bite down on …

Behold, the gorgon!

No image guarantees the presence of the beast. Structural diagrams can only ever raise, not answer, questions. A cursory trek through the MavenProject class itself reveals, however, a full 2200 lines of code, four times what some consider a healthy size and practically a package in its own right. The question is: when face-to-face with such a bruiser, what should you do?

Programmers usually do one of three things. First, they may simply look the other way, a common solution. Second, they may start ripping out what methods they can into smaller classes. Both options have their merits. The third option, though, often goes overlooked. This option acknowledges the dynamism of the god object; they grow so big because they do just that: they grow. They seldom stand still and they usually do not stop gobbling up functionality just because you decide to step in and take control. With the presumption of continued growth, option two looks shaky: like a hydra, the god object watches the classes you chop off soon themselves begin to writhe and gobble and grow. Cohesion degrades and, with all those dependencies skewering directly into the new implementation classes, coupling becomes problematic.

The third option recommends that you do not reduce the size of the god object at all. At least initially. Instead, you merely begin creating small interfaces – implemented by the god object – and have erstwhile clients of the god object redirect their attention to these interfaces, as sketched below.
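To make that first step concrete, here is a minimal sketch (the names are invented for illustration; this is not MavenProject's actual API): a narrow interface is carved out, the god object implements it, and a client that used to depend on the god object now depends only on the interface.

```java
// Step one of the third option: the god object is not shrunk at all.
// It merely implements a new, narrow interface, and clients are rewired
// to depend on that interface instead of on the god object itself.

interface ReportSource {                    // small, client-facing interface
    String summary();
}

class GodObject implements ReportSource {   // the beast, unchanged in size
    @Override
    public String summary() {
        return "...";
    }
    // ...two thousand other lines stay exactly where they are...
}

class ReportClient {                        // erstwhile client of GodObject
    private final ReportSource source;      // now sees only the interface

    ReportClient(ReportSource source) {
        this.source = source;
    }

    String render() {
        return "Report: " + source.summary();
    }
}
```

Repeat this for each client until nothing depends on the god object directly; only then, as described next, does the dissection begin.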
Opinions divide on the nature of these small interfaces. One camp suggests analyzing the god object's functions and grouping them semantically. This has the advantage of keeping the new interfaces disjoint, sharing no common function declarations, but the disadvantage of having clients receive interfaces all of whose functions they probably will not use. The other camp suggests creating a new interface for each client package (or significant class), making the interfaces more client- than provider-specific. This has the advantage that clients use precisely and fully the interfaces that they receive, but the disadvantage that many interfaces might declare the same functions. In practice, some begin with the second approach and, when finished, gather duplicated function declarations into common, extended interfaces.

However achieved, this is but the first step. Once the penultimate dependency on the god object snaps, still the knives should remain sheathed. The beast must first be made package-private (Java's default access modifier) so that future clients cannot bypass the interfaces, and then preferably be moved to its own, new package, leaving the interfaces behind. Only then, when the god object is isolated from all else and with the interfaces now revealing themselves as a Facade, can the dissections begin.

That might sound like a lot of work. It is. No one said this would be easy. The final configuration, however, offers a Facade of interfaces to clients untroubled by decoupled implementation refactorings, and a separate package from which unearthly screams radiate throughout the entire program. For a while.

Just for fun, we can take an X-ray of our MavenProject class to see what its guts look like. Often large classes suggest their own extractions, and as we see from figure 2 that deepCopy() function looks tempting.

Summary. God objects: don't.

Photo credit attribution: CC Image Jungle and The Sun courtesy of yassina on Flickr. CC Image Perseus triumphant with Medusa head courtesy of Monitotxi on Flickr.

Reference: Going Dawkins on god objects from our JCG partner Edmund Kirwan at the A blog about software blog.

Architecture-Breaking Bugs – when a Dreamliner becomes a Nightmare

The history of computer systems is also the history of bugs, including epic, disastrous bugs that have caused millions of dollars in damage and destruction and even death, as well as many other less spectacular but expensive system and project failures. Some of these appear to be small and stupid mistakes, like the infamous Ariane 5 rocket crash, caused by a one-line programming error. But a one-line programming error, or any other isolated mistake or failure, cannot cause serious damage to a large system without fundamental failures in architecture and design, and failures in management.

Boeing's 787 Dreamliner Going Nowhere (The Economist, Feb 26 2013)

These kinds of problems are what Barry Boehm calls "Architecture Breakers": where a system's design doesn't hold up in the real world, when you run face-first into a fundamental weakness or a hard limit on what is possible with the approach that you took or the technology platform that you selected. Architecture Breakers happen at the edges – or beyond the edges – of the design, off the normal, nominal, happy paths. The system works, except for a "one in a million" exceptional error, which nobody takes seriously until that one-in-a-million problem starts happening every few days. Or the system crumples under an unexpected surge in demand, demand that isn't going to go away – unless you can't find a way to quickly scale the system to keep up, in which case you won't have a demand problem any more, because those customers won't be coming back. Or what looks like a minor operational problem turns out to be the first sign of a fundamental reliability or safety problem in the system.

Dreamliner is Troubled by Questions about Safety (NY Times, Jan 10, 2013)

Finding Architecture Breakers

It starts off with a nasty bug or an isolated operational issue or a security incident. As you investigate and start to look deeper, you find more cases, gaping holes in the design, hard limits to what the system can do, or failures that can't be explained and can't be stopped. The design starts to unravel as each problem opens up to another problem. Fixing it right is going to take time and money, maybe even going back to the drawing board and revisiting foundational architectural decisions and technology choices. What looked like a random failure or an ugly bug just turned into something much uglier, and much, much more expensive.

Deepening Crisis for the Boeing 787 (NY Times, Jan 17 2013)

What makes these problems especially bad is that they are found late, way past design and even past acceptance testing, usually when the system is already in production and you have a lot of real customers using it to get real work done. This is when you can least afford to encounter a serious problem. When something does go wrong, it can be difficult to recognize how serious it is right away.
It can take two or three or more failures before you realize – and accept – how bad things really are, and before you see enough of a pattern to understand where the problem might be.

Boeing Batteries Said to Fail 10 Times Before Incident (Bloomberg, Jan 30 2013)

By then you may be losing customers and losing money, and you're under extreme pressure to come up with a fix, and nobody wants to hear that you have to stop and go back and rewrite a piece of the system, or re-architect it and start again – or that you need more time to think and test and understand what's wrong and what your options are before you can even tell them how long it might take and how much it could cost to fix things.

Regulators Around the Globe Ground Boeing 787s (NY Times, Jan 18 2013)

What can Break your Architecture?

Most Architecture Breakers are fundamental problems in important non-functional aspects of a system:

- Stability and data integrity: some piece of the system won't stay up under load or fails intermittently after the system has been running for hours or days or weeks, or you lost critical customer data, or you can't recover and restore service fast enough after an operational failure.
- Scalability and throughput: the platform (language or container or communications fabric or database – or all of them) is beautiful to work with, but can't keep up as more customers come in, even if you throw more hardware at it. Ask Twitter about trying to scale out Ruby, or Facebook about scaling PHP, or anyone who has ever tried to scale out Oracle RAC.
- Latency: requirements for real-time response-time/deadline satisfaction escalate, or you run into unacceptable jitter and variability (you chose Java as your run-time platform – what happens when GC kicks in?).
- Security: you just got hacked, and you find out that the one bug that an attacker exploited is only the first of hundreds or thousands of bugs that will need to be found and fixed, because your design or the language and the framework that you picked (or the way that you used it) is as full of security holes as Swiss cheese.

These problems can come from misunderstanding what an underlying platform technology or framework can actually do – what the design tolerances for that architecture or technology are. Or from completely missing, overlooking, ignoring or misunderstanding an important aspect of the design. These aren't problems that you can code your way out of, at least not easily. Sometimes the problem isn't in your code anyway: it's in a third-party platform technology that can't keep up or won't stay up. The language itself, or an important part of the stack like the container, database, or communications fabric, or whatever you are depending on for clustering and failover or to do some other magic. At high scale in the real world, almost any piece of software that somebody else wrote can and will fall short of what you really need, or what the vendor promised.

Boeing, 787 Battery Supplier at Odds over Fixes (Wall Street Journal, Feb 27 2013)

You'll have to spend time working with a vendor (or sometimes with more than one vendor), help them understand your problem, and get them to agree that it's really their problem and that they have to fix it. And if they can't fix it, or can't fix it quickly enough, you'll need to come up with a Plan B quickly, and hope that your new choice won't run into other problems that may be just as bad or even worse.
How to Avoid Architecture Breakers

Architecture Breakers are caused by decisions that you made early and got wrong – or that you didn't make early enough, or didn't make at all. Boehm talks about Architecture Breakers as part of an argument against Simple Design – that many teams, especially Agile teams, spend too much time focused on the happy path, building new features to make the customer happy, and not enough time on upfront architecture and thinking about what could go wrong. But Architecture Breakers have been around a lot longer than Agile and simple design: in Making Software (Chapter 10, "Architecting: How Much and When"), Boehm goes back to the 1980s, when he first recognized these kinds of problems, when Structured Programming and later Waterfall were the "right way" to do things.

Boehm's solution is more and better architecture definition and technical risk management through Spiral software development: a lifecycle with architecture upfront to identify risk areas, which are then explored through iterative, risk-driven design, prototyping and development in multiple stages. Spiral development is like today's iterative, incremental development methods on steroids, using risk-based architectural spikes, but with much longer iterative development and technical prototyping cycles, more formal risk management, more planning, more paperwork, and much higher costs.

Bugs like these can't all be solved by spending more time on architecture and technical risk management upfront – whether through Spiral development or a beefed-up, disciplined Agile development approach. More time spent upfront won't help if you make naïve assumptions about scalability, responsiveness, reliability or security, or if you don't understand these problems well enough to identify the risks. Architecture Breakers won't be found in design reviews – because you won't be looking for something that you don't know could be a problem – unless maybe you are running through structured failure modelling exercises like FMEA (Failure mode and effects analysis) or FMECA (Failure mode, effects and criticality analysis), which force you to ask hard questions, but which few people outside of regulated industries have even heard about. And Architecture Breakers can't all be caught in testing, even extended longevity/soak testing and extensive fuzzing and simulated failures and fault injection and destructive testing and stress testing – even if all the bugs that are found this way are taken seriously (because these kinds of extreme tests are often considered unrealistic).

You have to be prepared to deal with Architecture Breakers. Anticipating problems and partitioning your architecture using something like the Stability Patterns in Michael Nygard's excellent book Release It! will at least keep serious run-time errors from spreading and taking an entire system out (these strategies will also help with scaling and with containing security attacks); a sketch of one such pattern follows below. And if and when you do see a "once in a million" error in reviews or testing or production, understand how serious it can be, and act right away – before a Dreamliner turns into a nightmare.

Reference: Architecture-Breaking Bugs – when a Dreamliner becomes a Nightmare from our JCG partner Jim Bird at the Building Real Software blog.
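As a closing aside to the article above: to make the stability-patterns idea concrete, here is a minimal circuit breaker in the spirit of Release It!. This is my own illustrative sketch, not code from the book or from the original article, and it is deliberately simplified; a production version would need finer-grained concurrency control and metrics.

```java
import java.util.concurrent.Callable;

// Minimal circuit breaker: after N consecutive failures the breaker "opens"
// and fails fast, so callers stop piling up behind a sick dependency.
// After a cool-down period, one call is allowed through to probe recovery.
public class CircuitBreaker {

    private final int failureThreshold;
    private final long retryAfterMillis;
    private int consecutiveFailures = 0;
    private long openedAt = 0;

    public CircuitBreaker(int failureThreshold, long retryAfterMillis) {
        this.failureThreshold = failureThreshold;
        this.retryAfterMillis = retryAfterMillis;
    }

    public synchronized <T> T call(Callable<T> action) throws Exception {
        boolean open = consecutiveFailures >= failureThreshold;
        if (open && System.currentTimeMillis() - openedAt < retryAfterMillis) {
            throw new IllegalStateException("Circuit open - failing fast");
        }
        try {
            T result = action.call();  // if open, this call is the recovery probe
            consecutiveFailures = 0;   // success closes the circuit
            return result;
        } catch (Exception e) {
            consecutiveFailures++;
            if (consecutiveFailures >= failureThreshold) {
                openedAt = System.currentTimeMillis(); // (re)open the breaker
            }
            throw e;
        }
    }
}
```

The point is architectural rather than these twenty lines of code: wrapping every call to a flaky integration point this way partitions the system, so a failure in one dependency degrades one feature instead of exhausting every request thread in the container.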

Scala traits implementation and interoperability. Part II: Traits linearization

This is a continuation of Scala traits implementation and interoperability. Part I: Basics. The dreadful diamond problem can be mitigated using Scala traits and a process called linearization. Take the following example:

```scala
trait Base { def msg = "Base" }

trait Foo extends Base { abstract override def msg = "Foo -> " + super.msg }
trait Bar extends Base { abstract override def msg = "Bar -> " + super.msg }
trait Buzz extends Base { abstract override def msg = "Buzz -> " + super.msg }

class Riddle extends Base with Foo with Bar with Buzz {
  override def msg = "Riddle -> " + super.msg
}
```

Now let me ask you a little question: what is the output of (new Riddle).msg?

1. Riddle -> Base
2. Riddle -> Buzz -> Base
3. Riddle -> Foo -> Base
4. Riddle -> Buzz -> Bar -> Foo -> Base

It's not (1), because Base.msg is overridden by all the traits we extend, so that shouldn't be a surprise. But it's also not (2) or (3). One might expect either Buzz or Foo to be printed, remembering that you can stack traits and either the first or the last one wins (actually: the last). So why is Riddle -> Buzz -> Base incorrect? Isn't Buzz.msg calling super.msg, and doesn't Buzz explicitly state that Base is its parent? There is a bit of magic here. When you stack multiple traits as we did (extends Base with Foo with Bar with Buzz), the Scala compiler orders them (linearizes) so that there is always one path from every class to the parent (Base). The order is determined by the reversed order of the traits mixed in (the last one wins and becomes first).

Why would you ever…? Turns out stackable traits are great for implementing several layers of decoration around a real object. You can easily add decorators and move them around. We have a simple calculator abstraction and one implementation:

```scala
trait Calculator {
  def increment(x: Int): Int
}

class RealCalculator extends Calculator {
  override def increment(x: Int) = {
    println(s"increment($x)")
    x + 1
  }
}
```

We came up with three aspects we would like to selectively apply depending on the circumstances: logging all increment() invocations, caching, and validation. First let's define all of them:

```scala
trait Logging extends Calculator {
  abstract override def increment(x: Int) = {
    println(s"Logging: $x")
    super.increment(x)
  }
}

trait Caching extends Calculator {
  abstract override def increment(x: Int) =
    if (x < 10) { //silly caching...
      println(s"Cache hit: $x")
      x + 1
    } else {
      println(s"Cache miss: $x")
      super.increment(x)
    }
}

trait Validating extends Calculator {
  abstract override def increment(x: Int) =
    if (x >= 0) {
      println(s"Validation OK: $x")
      super.increment(x)
    } else
      throw new IllegalArgumentException(x.toString)
}
```

Creating a "raw" calculator is of course possible:

```scala
scala> val calc = new RealCalculator
calc: RealCalculator = RealCalculator@bbd9e6

scala> calc increment 17
increment(17)
res: Int = 18
```

But we are free to mix in as many trait mixins as we want, in any order:

```scala
scala> val calc = new RealCalculator with Logging with Caching with Validating
calc: RealCalculator with Logging with Caching with Validating = $anon$1@1aea543

scala> calc increment 17
Validation OK: 17
Cache miss: 17
Logging: 17
increment(17)
res: Int = 18

scala> calc increment 9
Validation OK: 9
Cache hit: 9
res: Int = 10
```

See how the subsequent mixins kick in? Of course each mixin can skip the super call, e.g. on a cache hit or validation failure. Just to be clear here: it doesn't matter that each of the decorating mixins has Calculator defined as its base trait. super.increment() is always routed to the next trait in the stack (the previous one in the class declaration).
That means super is more dynamic and dependent on the target usage rather than on the declaration. We will explain this later, but first another example: let's put logging before caching so that, no matter whether there was a cache hit or miss, we always get logging. Moreover, we "disable" validation by simply skipping it:

```scala
scala> class VerboseCalculator extends RealCalculator with Caching with Logging
defined class VerboseCalculator

scala> val calc = new VerboseCalculator
calc: VerboseCalculator = VerboseCalculator@f64dcd

scala> calc increment 42
Logging: 42
Cache miss: 42
increment(42)
res: Int = 43

scala> calc increment 4
Logging: 4
Cache hit: 4
res: Int = 5
```

I promised to explain how stacking works underneath. You should be really curious how this "funky" super is implemented, as it cannot simply rely on the invokespecial bytecode instruction used with a normal super. Unfortunately it's complex, but it is worth knowing and understanding, especially when stacking doesn't work as expected. Calculator and RealCalculator compile pretty much exactly to what you might have expected:

```java
public interface Calculator {
    int increment(int i);
}

public class RealCalculator implements Calculator {
    public int increment(int x) {
        return x + 1;
    }
}
```

But how would the following class be implemented?

```scala
class FullBlownCalculator extends RealCalculator with Logging with Caching with Validating
```

Let's start with the class itself:

```java
public class FullBlownCalculator extends RealCalculator implements Logging, Caching, Validating {

    public int increment(int x) {
        return Validating$class.increment(this, x);
    }

    public int Validating$$super$increment(int x) {
        return Caching$class.increment(this, x);
    }

    public int Caching$$super$increment(int x) {
        return Logging$class.increment(this, x);
    }

    public int Logging$$super$increment(int x) {
        return super.increment(x);
    }
}
```

Can you see what's going on here? Before I show the implementations of all these *$class classes, spend a little bit of time confronting the class declaration (the trait order in particular) with these awkward *$$super$* methods. Here is the missing piece that will allow us to connect all the dots:

```java
public abstract class Logging$class {
    public static int increment(Logging that, int x) {
        return that.Logging$$super$increment(x);
    }
}

public abstract class Caching$class {
    public static int increment(Caching that, int x) {
        return that.Caching$$super$increment(x);
    }
}

public abstract class Validating$class {
    public static int increment(Validating that, int x) {
        return that.Validating$$super$increment(x);
    }
}
```

Not helpful? Let's go slowly through the first step. When you call FullBlownCalculator.increment(), according to the trait stacking rules, it should call Validating.increment(). As you can see, FullBlownCalculator.increment() forwards this (itself) to the hidden static Validating$class.increment() method. That method expects an instance of Validating, but since FullBlownCalculator also extends that trait, passing this is fine. Now look at Validating$class.increment(). It merely forwards to FullBlownCalculator.Validating$$super$increment(x). And when we, again, go back to FullBlownCalculator, we will notice that this method delegates to the static Caching$class.increment(). From here the process is similar. Why the extra delegation through a static method? Mixins don't know which class is going to be next in the stack (the "next super"). Thus they simply delegate to the appropriate virtual $$super$ family of methods. Each class using these mixins is obligated to implement them, providing the correct "super".
To put that into perspective: the compiler cannot simply delegate straight from Validating$class.increment() to Caching$class.increment(), even though that is the FullBlownCalculator workflow. If we create another class that reverses these mixins (RealCalculator with Validating with Caching), a hardcoded dependency between the mixins would no longer be valid. Thus it's the responsibility of the class, not the mixin, to declare the order. If you still don't follow, here is the complete call stack for FullBlownCalculator.increment():

```scala
val calc = new FullBlownCalculator
calc increment 42
```

```
FullBlownCalculator.increment(42)
`- Validating$class.increment(calc, 42)
   `- Validating.Validating$$super$increment(42) (on calc)
      `- Caching$class.increment(calc, 42)
         `- Caching.Caching$$super$increment(42) (on calc)
            `- Logging$class.increment(calc, 42)
               `- Logging.Logging$$super$increment(42) (on calc)
                  `- super.increment(42)
                     `- RealCalculator.increment(42) (on calc)
```

Now you see why it's called "linearization"!

Reference: Scala traits implementation and interoperability. Part II: Traits linearization from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.

Monetize your technical writing with the JCG Revenue Shared Program

Java Code Geeks are thrilled to announce the W4G Revenue Shared Program, the first crowdsourced revenue-shared platform for technical writers. We have an open invitation to all technology enthusiasts and programming geeks who are willing to write for us as guest authors. We are looking for creative writers to join our ever-growing team. The content should be unique and not appear anywhere else online.

By joining the W4G program, you will be able to build your personal brand and monetize your technical expertise at the same time. Java Code Geeks will provide the platform on which you will be able to serve your ads on technical articles, giving you the opportunity to earn revenue from your published work. To earn from your articles, you must first sign up with an earnings program such as Google AdSense. Java Code Geeks will then serve your ads in rotation for 60% of the article impressions, and you will be able to monetize those impressions. Note: you shall NOT receive any money directly from Java Code Geeks; all earnings will come from the third-party earnings program you have joined.

By joining the program, you will have the benefit of utilizing a well-established technical platform of global reach and supreme quality. You will enjoy a large number of regular readers and followers, as well as organic search traffic attracted by our authority site. Additionally, you will be able to connect your profile on Java Code Geeks with your personal blog or website. Thus, the more you write, the more exposure you get, the more chances you have to earn, and possibly the more readers for your own blog. All the links shall be do-follow on our site, so you will also receive quality backlinks for your site as an extra bonus. All your articles will be promoted as regular Java Code Geeks articles in our social media channels, thus ensuring maximum exposure.

Joining our W4G program is simple and easy. Just send an email to our Executive Editor, Byron Kiourtzoglou. You may find our program's terms here. So, prepare to boost your personal brand and earn some money in the process by joining the W4G program! Happy writing, geeks!

SiftingAppender: logging different threads to different log files

One novel feature of Logback is SiftingAppender (JavaDoc). In short, it's a proxy appender that creates one child appender for each unique value of a given runtime property. Typically this property is taken from the MDC. Here is an example based on the official documentation linked above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
    <discriminator>
      <key>userid</key>
      <defaultValue>unknown</defaultValue>
    </discriminator>
    <sift>
      <appender name="FILE-${userid}" class="ch.qos.logback.core.FileAppender">
        <file>user-${userid}.log</file>
        <layout class="ch.qos.logback.classic.PatternLayout">
          <pattern>%d{HH:mm:ss:SSS} | %-5level | %thread | %logger{20} | %msg%n%rEx</pattern>
        </layout>
      </appender>
    </sift>
  </appender>
  <root level="ALL">
    <appender-ref ref="SIFT" />
  </root>
</configuration>
```

Notice that the <file> property is parameterized with the ${userid} property. Where does this property come from? It has to be placed in the MDC. For example, in a web application using Spring Security I tend to use a servlet filter with the help of SecurityContextHolder:

```scala
import javax.servlet._
import org.slf4j.MDC
import org.springframework.security.core.context.SecurityContextHolder
import org.springframework.security.core.userdetails.UserDetails

class UserIdFilter extends Filter {
  def init(filterConfig: FilterConfig) {}

  def doFilter(request: ServletRequest, response: ServletResponse, chain: FilterChain) {
    val userid = Option(
      SecurityContextHolder.getContext.getAuthentication
    ).collect { case u: UserDetails => u.getUsername }
    MDC.put("userid", userid.orNull)
    try {
      chain.doFilter(request, response)
    } finally {
      MDC.remove("userid")
    }
  }

  def destroy() {}
}
```

Just make sure this filter is applied after the Spring Security filter. But that's not the point. The presence of the ${userid} placeholder in the file name causes the sifting appender to create one child appender for each different value of this property (thus: different user names). Running your web application with this configuration will quickly create several log files like user-alice.log, user-bob.log, and user-unknown.log in case the MDC property is not set.
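One caveat worth adding here (my note, not part of the original article): the MDC is thread-local, so if a request hands work off to a thread pool, the userid entry will not follow it automatically and those log lines will land in user-unknown.log. A common remedy is to capture the MDC map when the task is submitted and restore it on the worker thread, along these lines:

```java
import java.util.Map;
import org.slf4j.MDC;

// Captures the submitting thread's MDC (including "userid") and installs it
// around the task's execution on the pool thread, restoring the old state after.
public final class MdcPropagatingRunnable implements Runnable {

    private final Runnable delegate;
    private final Map<String, String> capturedContext = MDC.getCopyOfContextMap();

    public MdcPropagatingRunnable(Runnable delegate) {
        this.delegate = delegate;
    }

    @Override
    public void run() {
        Map<String, String> previous = MDC.getCopyOfContextMap();
        if (capturedContext != null) {
            MDC.setContextMap(capturedContext);
        }
        try {
            delegate.run();
        } finally {
            if (previous != null) {
                MDC.setContextMap(previous);
            } else {
                MDC.clear();
            }
        }
    }
}
```

Usage is then simply executor.submit(new MdcPropagatingRunnable(task)).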
Another use case is using the thread name rather than an MDC property. Unfortunately this is not built in, but it can easily be plugged in using a custom Discriminator, as opposed to the default MDCBasedDiscriminator:

```java
public class ThreadNameBasedDiscriminator implements Discriminator<ILoggingEvent> {

    private static final String KEY = "threadName";

    private boolean started;

    @Override
    public String getDiscriminatingValue(ILoggingEvent iLoggingEvent) {
        return Thread.currentThread().getName();
    }

    @Override
    public String getKey() {
        return KEY;
    }

    public void start() {
        started = true;
    }

    public void stop() {
        started = false;
    }

    public boolean isStarted() {
        return started;
    }
}
```

Now we have to instruct logback.xml to use our custom discriminator:

```xml
<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
  <discriminator class="com.blogspot.nurkiewicz.ThreadNameBasedDiscriminator"/>
  <sift>
    <appender class="ch.qos.logback.core.FileAppender">
      <file>app-${threadName}.log</file>
      <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>%d{HH:mm:ss:SSS} | %-5level | %logger{20} | %msg%n%rEx</pattern>
      </layout>
    </appender>
  </sift>
</appender>
```

Note that we no longer put %thread in the PatternLayout – it is unnecessary, as the thread name is part of the log file name:

- app-main.log
- app-http-nio-8080-exec-1.log
- app-taskScheduler-1.log
- app-ForkJoinPool-1-worker-1.log
- …and so forth

This is probably not the most convenient setup for a server application, but on the desktop, where you have a limited number of focused threads (the EDT, an IO thread, etc.), it might be a viable alternative.

Reference: SiftingAppender: logging different threads to different log files from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.

Monitoring S3 uploads for real-time data

If you are working on Big Data and its bleeding-edge technologies like Hadoop, the primary thing you need is a "dataset" to work on. This data can be reviews, blogs, news, social media data (Twitter, Facebook, etc.), domain-specific data, research data, forums, groups, feeds, fire hose data, and so on. Generally, companies reach out to data vendors to fetch such data. Normally, these data vendors dump the data into a shared server kind of environment. For us to use this data for processing with MapReduce and so forth, we move it to S3, first for storage and then for processing.

Assume the data belongs to social media such as Twitter or Facebook; then the data may be dumped into directories following the date format. In the majority of cases, this is the practice. Assuming 140-150 GB/day being dumped in a hierarchy like 2013/04/15, i.e. the yyyy/mm/dd format, as a stream of data, how do you:

- upload the files to S3 in the same hierarchy, to a given bucket?
- monitor the new incoming files and upload them?
- save space effectively on the disk?
- ensure the reliability of the uploads to S3?
- clean up, if logging is enabled to track the process?
- retry the failed uploads?

These were some of the questions running at the back of my mind when I wanted to automate the uploads to S3. Also, I wanted zero human intervention, or at least as little as possible! So, I came up with:

- s3sync / s3cmd. I have just used one script of s3cmd here and not s3sync in real use; maybe in the future, so I keep it around.
- the Python Watcher script by Greggory Hernandez, here: https://github.com/greggoryhz/Watcher. A big thanks! This helped me with the monitoring part and it works great!
- a few of my own scripts.

What are the ingredients?

Installation of s3sync:

- Install Ruby from the repository:

```
$ sudo apt-get install ruby libopenssl-ruby
```

- Confirm with the version:

```
$ ruby -v
```

- Download and unzip s3sync:

```
$ wget http://s3.amazonaws.com/ServEdge_pub/s3sync/s3sync.tar.gz
$ tar -xvzf s3sync.tar.gz
```

- Install the certificates:

```
$ sudo apt-get install ca-certificates
$ cd s3sync/
```

- Add the credentials to s3config.yml for s3sync to connect to S3:

```
$ cd s3sync/
$ sudo vi s3config.yml
```

```
aws_access_key_id: ABCDEFGHIJKLMNOPQRST
aws_secret_access_key: hkajhsg/knscscns19mksnmcns
ssl_cert_dir: /etc/ssl/certs
```

Edit aws_access_key_id and aws_secret_access_key to your own credentials.

Installation of Watcher:

- Go to https://github.com/greggoryhz/Watcher
- Install git if you have not already.
- Clone the Watcher repository:

```
$ git clone https://github.com/greggoryhz/Watcher.git
$ cd Watcher/
```

The remaining ingredients are my own wrapper scripts and cron.

Next, having the environment set up, let's make some common "assumptions":

- Data being dumped will be at /home/ubuntu/data/ – from there it could be, for example, 2013/04/15.
- s3sync is located at /home/ubuntu.
- The Watcher repository is at /home/ubuntu.

Getting our hands dirty…

Go to Watcher and set the directory to be watched and the corresponding action to be undertaken:

```
$ cd Watcher/
```

Start the script:

```
$ sudo python watcher.py start
```

This will create a .watcher directory at /home/ubuntu. Now stop it:

```
$ sudo python watcher.py stop
```

Go to the .watcher directory that was created and set the destination to be watched and the action to be undertaken in jobs.yml, i.e. the watch: and command: entries:
```yaml
# Copyright (c) 2010 Greggory Hernandez

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

# ---------------------------END COPYRIGHT--------------------------------------

# This is a sample jobs file. Yours should go in ~/.watcher/jobs.yml
# if you run watcher.py start, this file and folder will be created

job1: # a generic label for a job. Currently not used make it whatever you want
  label: Watch /home/ubuntu/data for added or removed files

  # directory or file to watch. Probably should be abs path.
  watch: /home/ubuntu/data

  # list of events to watch for.
  # supported events:
  # 'access' - File was accessed (read) (*)
  # 'atrribute_change' - Metadata changed (permissions, timestamps, extended attributes, etc.) (*)
  # 'write_close' - File opened for writing was closed (*)
  # 'nowrite_close' - File not opened for writing was closed (*)
  # 'create' - File/directory created in watched directory (*)
  # 'delete' - File/directory deleted from watched directory (*)
  # 'self_delete' - Watched file/directory was itself deleted
  # 'modify' - File was modified (*)
  # 'self_move' - Watched file/directory was itself moved
  # 'move_from' - File moved out of watched directory (*)
  # 'move_to' - File moved into watched directory (*)
  # 'open' - File was opened (*)
  # 'all' - Any of the above events are fired
  # 'move' - A combination of 'move_from' and 'move_to'
  # 'close' - A combination of 'write_close' and 'nowrite_close'
  #
  # When monitoring a directory, the events marked with an asterisk (*) above
  # can occur for files in the directory, in which case the name field in the
  # returned event data identifies the name of the file within the directory.
  events: ['create', 'move_from', 'move_to']

  # TODO:
  # this currently isn't implemented, but this is where support will be added for:
  # IN_DONT_FOLLOW, IN_ONESHOT, IN_ONLYDIR and IN_NO_LOOP
  # There will be further documentation on these once they are implmented
  options: []

  # if true, watcher will monitor directories recursively for changes
  recursive: true

  # the command to run. Can be any command. It's run as whatever user started watcher.
  # The following wildards may be used inside command specification:
  # $$ dollar sign
  # $watched watched filesystem path (see above)
  # $filename event-related file name
  # $tflags event flags (textually)
  # $nflags event flags (numerically)
  # $dest_file this will manage recursion better if included as the dest
  #   (especially when copying or similar)
  #   if $dest_file was left out of the command below, Watcher won't properly
  #   handle newly created directories when watching recursively. It's fine
  #   to leave out when recursive is false or you won't be creating new
  #   directories.
  # $src_path is only used in move_to and is the corresponding path from move_from
  # $src_rel_path [needs doc]
  command: sudo sh /home/ubuntu/s3sync/monitor.sh $filename
```

Next, create a script called monitor.sh in the s3sync directory to upload to S3, as below:

- The variable you may like to change is the S3 bucket path, "s3path", in monitor.sh.
- This script will upload the new incoming file detected by the watcher script in the Reduced Redundancy Storage (RRS) format. (You can remove the header, provided you are not interested in storing in RRS format.)
- The script calls the s3cmd Ruby script to upload recursively, and thus maintains the hierarchy, i.e. the yyyy/mm/dd format with files *.*.
- It deletes the file successfully uploaded to S3 from the local path, to save disk space.
- The script will not delete the directory, as that is taken care of by yet another script, re-upload.sh, which acts as a backup that uploads failed uploads to S3 again.

Go to the s3sync directory:

```
$ cd ~/s3sync
$ sudo vim monitor.sh
```

```bash
#!/bin/bash
##...........................................................##
## script to upload to S3BUCKET, once the change is detected ##
##...........................................................##

## AWS Credentials required for s3sync ##
export AWS_ACCESS_KEY_ID=ABCDEFGHSGJBKHKDAKS
export AWS_SECRET_ACCESS_KEY=jhhvftGFHVgs/bagFVAdbsga+vtpmefLOd
export SSL_CERT_DIR=/etc/ssl/certs

#echo "Running monitor.sh!"
echo "[INFO] File or directory modified = $1 "

## Read arguments
PASSED=$1

# Declare the watch path and S3 destination path
watchPath='/home/ubuntu/data'
s3path='bucket-data:'

# Trim watch path from PASSED
out=${PASSED#$watchPath}
outPath=${out#"/"}

echo "[INFO] ${PASSED} will be uploaded to the S3PATH : $s3path$outPath"

if [ -d "${PASSED}" ]
then
    echo "[SAFEMODE ON] Directory created will not be uploaded, unless a file exists!"
elif [ -f "${PASSED}" ]
then
    ruby /home/ubuntu/s3sync/s3cmd.rb --ssl put $s3path$outPath ${PASSED} x-amz-storage-class:REDUCED_REDUNDANCY; #USE s3cmd : File
else
    echo "[ERROR] ${PASSED} is not valid type!!";
    exit 1
fi

RETVAL=$?
[ $RETVAL -eq 0 ] && echo "[SUCCESS] Upload successful! " &&
if [ -d "${PASSED}" ]
then
    echo "[SAFEMODE ON] ${PASSED} is a directory and its not deleted!";
elif [ -f "${PASSED}" ]
then
    sudo rm -rf ${PASSED};
    echo "[SUCCESS] Sync and Deletion successful!";
fi

[ $RETVAL -ne 0 ] && echo "[ERROR] Synchronization failed!!"
```

Next, create a script called re-upload.sh, which will upload the failed file uploads:

- This script ensures that the files left over from monitor.sh (failed uploads – the chance of this is very small, maybe 2-4 files/day, due to various reasons) will be uploaded to S3 again with the same hierarchy, in RRS format.
- After a successful upload, it deletes the file, and hence the directory if it is empty.

Go to the s3sync directory.
Create a script called re-upload.sh which will retry the failed file uploads. This script ensures that any files left over by monitor.sh (failed uploads – the chance of this is very low, perhaps 2-4 files/day, for various reasons) are uploaded to S3 again, with the same hierarchy, in RRS format. After a successful upload it deletes the file, and then the directory if it is empty.

Go to the s3sync directory:

$ cd ~/s3sync
$ sudo vim re-upload.sh

#!/bin/bash
##.........................................................##
## script to detect failed uploads of other date directories
## and re-try                                              ##
##.........................................................##

## AWS Credentials required for s3sync ##
export AWS_ACCESS_KEY_ID=ABHJGDVABU5236DVBJD
export AWS_SECRET_ACCESS_KEY=hgvgvjhgGYTfs/I5sdn+fsbfsgLKjs
export SSL_CERT_DIR=/etc/ssl/certs

# Get the previous date
today_date=$(date -d "1 days ago" +%Y%m%d)
year=$(date -d "1 days ago" +%Y%m%d|head -c 4|tail -c 4)
month=$(date -d "1 days ago" +%Y%m%d|head -c 6|tail -c 2)
yday=$(date -d "1 days ago" +%Y%m%d|head -c 8|tail -c 2)

# Set the path of data
basePath="/home/ubuntu/data"
datePath="$year/$month/$yday"
fullPath="$basePath/$datePath"
echo "Path checked for: $fullPath"

# Declare the watch path and S3 destination path
watchPath='/home/ubuntu/data'
s3path='bucket-data:'

# check for left files (failed uploads)
if [ "$(ls -A $fullPath)" ]; then
    for i in `ls -a $fullPath/*.*`
    do
        echo "Left over file: $i";
        if [ -f "$i" ]
        then
            out=${i#$watchPath};
            outPath=${out#"/"};
            echo "Uploading to $s3path$outPath";
            ruby /home/ubuntu/s3sync/s3cmd.rb --ssl put $s3path$outPath $i x-amz-storage-class:REDUCED_REDUNDANCY; #USE s3cmd : File
            RETVAL=$?
            [ $RETVAL -eq 0 ] && echo "[SUCCESS] Upload successful! " && sudo rm -rf $i && echo "[SUCCESS] Deletion successful!"
            [ $RETVAL -ne 0 ] && echo "[ERROR] Upload failed!!"
        else
            echo "[CLEAN] no files exist!!"; exit 1
        fi
    done
else
    echo "$fullPath is empty";
    sudo rm -rf $fullPath;
    echo "Successfully deleted $fullPath"
    exit 1
fi

# post failed uploads -- delete empty dirs
if [ "$(ls -A $fullPath)" ]; then
    echo "Man!! Somethingz FISHY! All (failed)uploaded files will be deleted. Are there files yet!??";
    echo "Man!! I cannot delete it then! Please go check $fullPath";
else
    echo "$fullPath is empty after uploads";
    sudo rm -rf $fullPath;
    echo "Successfully deleted $fullPath"
fi

Now for the dirtier work: logging and cleaning logs. Everything "echo"ed by monitor.sh can be found in ~/.watcher/watcher.log while watcher.py is running. This log helps us, initially and perhaps later too, to backtrack errors. Call of duty: janitor for cleaning logs. For this we can use cron to run a script on a schedule; I wanted it to run every Saturday at 8:00 AM. Create a script to clean the log, called clean_log.sh, in /home/ubuntu/s3sync (a sketch of what it might contain appears at the end of this article).

Time for cron:

$ crontab -e

Add the following lines at the end and save.

# EVERY SATURDAY 8:00AM clean watcher log
0 8 * * 6 sudo sh /home/ubuntu/s3sync/clean_log.sh

# EVERYDAY at 10:00AM check failed uploads of previous day
0 10 * * * sudo sh /home/ubuntu/s3sync/re-upload.sh

All set! Log cleaning happens every Saturday at 8:00 AM, and the re-upload script runs daily for the previous day, checking whether files remain and cleaning up accordingly.

Let's start the script. Go to the Watcher repository:

$ cd ~/Watcher
$ sudo python watcher.py start

Once started, this will create the ~/.watcher directory with watcher.log in it. So, this assures successful uploads to S3.

Reference: Monitoring S3 uploads for a real time data from our JCG partner Swathi V at the * Techie(S)pArK * blog.
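As promised above, here is a minimal sketch of what clean_log.sh might contain. The article never lists it, so this is an assumption: it only truncates ~/.watcher/watcher.log (the path comes from the article, assuming watcher runs as the ubuntu user; truncating rather than rotating is a simplification).

#!/bin/bash
# Hypothetical clean_log.sh -- the original article does not show its contents.
# Truncate the watcher log in place so the running watcher.py keeps a valid
# file handle while the log stops growing.
LOG="/home/ubuntu/.watcher/watcher.log"   # assumed location, per the article
if [ -f "$LOG" ]; then
    : > "$LOG"
fi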

Spring: To autowire or not to autowire

Since Spring 2.5, I have switched from the XML-based application context to annotations. Although I find them very useful and huge time savers, I've always had the feeling that I was losing something in terms of flexibility. In particular, the @Autowired annotation – or the standard @Inject – felt to me like the new "new", increasing the coupling between my classes and making it harder to change implementations when needed. I still feel that way a bit, but I've learned an interesting pattern to limit the problem when it comes to testing my code, i.e. when I want to replace the real implementation of a bean with a mock. Let's illustrate with an example. I want to build an application that finds interesting stuff on the web for me. I will start with a service which takes a URL and bookmarks it if it's a new one which happens to be interesting. Until recently, I may have coded something like this:

@Named
public class AwesomenessFinder {

    @Inject
    private BlogAnalyzer blogAnalyzer;

    @Inject
    private BookmarkService bookmarkService;

    public void checkBlog(String url) {
        if (!bookmarkService.contains(url) && blogAnalyzer.isInteresting(url)) {
            bookmarkService.bookmark(url);
        }
    }
}

This is bad, can you see why? If not, keep reading; I hope you will learn something useful today. Because I'm conscientious, I want to create unit tests for this code. Hopefully my algorithm is fine, but I want to make sure it won't bookmark boring blogs or bookmark the same URL twice. That's where the problems appear: I want to isolate the AwesomenessFinder from its dependencies. If I were using an XML configuration, I could simply inject a mock implementation in my test context. Can I do it with the annotations? Well, yes! There's a way, with the @Primary annotation. Let's try creating mock implementations for BlogAnalyzer and BookmarkService.

@Named
@Primary
public class BlogAnalyzerMock implements BlogAnalyzer {

    public boolean isInteresting(String url) {
        return true;
    }
}

@Named
@Primary
public class BookmarkServiceMock implements BookmarkService {

    Set<String> bookmarks = new HashSet<String>();

    public boolean contains(String url) {
        return bookmarks.contains(url);
    }

    public void bookmark(String url) {
        bookmarks.add(url);
    }
}

Because I use Maven and I put those mocks in the test/java directory, the main application won't see them and will inject the real implementations. On the other hand, the unit tests will see 2 implementations. The @Primary is required to prevent an exception like:

org.springframework.beans.factory.NoSuchBeanDefinitionException: No unique bean of type [service.BlogAnalyzer] is defined: expected single matching bean but found 2: [blogAnalyzerMock, blogAnalyzerImpl]

Now, I can test my algorithm:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:application-context.xml")
public class AwesomenessFinderTest {

    @Inject
    private AwesomenessFinder awesomenessFinder;

    @Inject
    private BookmarkService bookmarkService;

    @Test
    public void checkInterestingBlog_bookmarked() {
        String url = "http://www.javaspecialists.eu";
        assertFalse(bookmarkService.contains(url));
        awesomenessFinder.checkBlog(url);
        assertTrue(bookmarkService.contains(url));
    }
}

Not bad. I tested the happy path: an interesting blog gets bookmarked. Now, how do I go about testing the other cases? Of course I could add some logic to my mocks to treat certain URLs as already bookmarked or not interesting, but it would become clunky. And this is a very simple algorithm; imagine how bad it would get when testing something more complex.
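As an aside, the article doesn't show the application-context.xml that the test above loads. A minimal sketch might look like the following, assuming the beans live under a base package named service (the package that appears in the exception message above – adjust to your own layout):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- Picks up @Named/@Component classes; on the test classpath this
         includes the @Primary mocks defined above -->
    <context:component-scan base-package="service"/>

</beans>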
There is a better way, which requires redesigning my class and the way the dependencies are injected. Here is how:

@Named
public class AwesomenessFinder {

    private BlogAnalyzer blogAnalyzer;

    private BookmarkService bookmarkService;

    @Inject
    public AwesomenessFinder(BlogAnalyzer blogAnalyzer, BookmarkService bookmarkService) {
        this.blogAnalyzer = blogAnalyzer;
        this.bookmarkService = bookmarkService;
    }

    public void checkBlog(String url) {
        if (!bookmarkService.contains(url) && blogAnalyzer.isInteresting(url)) {
            bookmarkService.bookmark(url);
        }
    }
}

Note that I still autowire my dependencies with the @Inject annotation, so the callers of my AwesomenessFinder won't be affected. For example, the following in a client class will still work:

@Inject
private AwesomenessFinder awesomenessFinder;

However, the big difference is that I autowire at the constructor level, which gives me a clean way to inject mock implementations. And, since we're mocking, let's use a mocking library. Last year, I wrote a post about Mockito where I used ugly setters to inject my mocks. With the technique mentioned here, I don't need to expose my dependencies anymore; I get much better encapsulation. Here is what the updated test case looks like:

public class AwesomenessFinderTest {

    @Test
    public void checkInterestingBlog_bookmarked() {
        BookmarkService bookmarkService = mock(BookmarkService.class);
        when(bookmarkService.contains(anyString())).thenReturn(false);

        BlogAnalyzer blogAnalyzer = mock(BlogAnalyzer.class);
        when(blogAnalyzer.isInteresting(anyString())).thenReturn(true);

        AwesomenessFinder awesomenessFinder = new AwesomenessFinder(blogAnalyzer, bookmarkService);

        String url = "http://www.javaspecialists.eu";
        awesomenessFinder.checkBlog(url);

        verify(bookmarkService).bookmark(url);
    }
}

Note that this is now plain Java; there is no need to use Spring to inject the mocks. Also, the definition of those mocks is in the same place as their usage, which eases maintenance. To go a step further, let's implement the other test cases. To avoid code duplication, we'll refactor the test class and introduce some enums to keep the test cases as expressive as possible.

public class AwesomenessFinderTest {

    private enum Knowledge { KNOWN, UNKNOWN }

    private enum Quality { INTERESTING, BORING }

    private enum ExpectedBookmark { STORED, IGNORED }

    private enum ExpectedAnalysis { ANALYZED, SKIPPED }

    @Test
    public void checkInterestingBlog_bookmarked() {
        checkCase(Knowledge.UNKNOWN, Quality.INTERESTING, ExpectedBookmark.STORED, ExpectedAnalysis.ANALYZED);
    }

    @Test
    public void checkBoringBlog_ignored() {
        checkCase(Knowledge.UNKNOWN, Quality.BORING, ExpectedBookmark.IGNORED, ExpectedAnalysis.ANALYZED);
    }

    @Test
    public void checkKnownBlog_ignored() {
        checkCase(Knowledge.KNOWN, Quality.INTERESTING, ExpectedBookmark.IGNORED, ExpectedAnalysis.SKIPPED);
    }

    private void checkCase(Knowledge knowledge, Quality quality,
            ExpectedBookmark expectedBookmark, ExpectedAnalysis expectedAnalysis) {

        BookmarkService bookmarkService = mock(BookmarkService.class);
        boolean alreadyBookmarked = (knowledge == Knowledge.KNOWN);
        when(bookmarkService.contains(anyString())).thenReturn(alreadyBookmarked);

        BlogAnalyzer blogAnalyzer = mock(BlogAnalyzer.class);
        boolean interesting = (quality == Quality.INTERESTING);
        when(blogAnalyzer.isInteresting(anyString())).thenReturn(interesting);

        AwesomenessFinder awesomenessFinder = new AwesomenessFinder(blogAnalyzer, bookmarkService);

        String url = "whatever";
        awesomenessFinder.checkBlog(url);

        if (expectedBookmark == ExpectedBookmark.STORED) {
            verify(bookmarkService).bookmark(url);
        } else {
            verify(bookmarkService, never()).bookmark(url);
        }

        if (expectedAnalysis == ExpectedAnalysis.ANALYZED) {
            verify(blogAnalyzer).isInteresting(url);
        } else {
            verify(blogAnalyzer, never()).isInteresting(url);
        }
    }
}

Last but not least, a nice bonus of injection by constructor is the ability to see all the dependencies of a class in one place (the constructor). If the list of dependencies grows beyond control, you get a very obvious code smell in the size of the constructor. It's a sign that your class probably has more than one responsibility and should be split into multiple classes, which are easier to isolate for unit testing.

Reference: To autowire or not to autowire from our JCG partner Damien Lepage at the Programming and more blog.

Spring Social Twitter Setup

In the first part of this series, we looked at how we could consume the StackExchange REST API in order to retrieve its top questions. This second part will focus on setting up the support necessary to interact with the Twitter REST APIs using the Spring Social Twitter project. The end goal is to be able to tweet these questions, two per day, on several accounts, each focused on a single topic.

1. Using Spring Social Twitter

The dependencies necessary to use the Spring Social Twitter project are straightforward. First, we define spring-social-twitter itself:

<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-twitter</artifactId>
    <version>1.0.3.RELEASE</version>
</dependency>

Then, we need to override some of its dependencies with more up-to-date versions:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>3.2.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>3.2.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.codehaus.jackson</groupId>
    <artifactId>jackson-mapper-asl</artifactId>
    <version>1.9.12</version>
</dependency>

Both spring-core and spring-web are defined as dependencies by spring-social-twitter, but with older versions – 3.0.7.RELEASE and 3.1.0.RELEASE respectively. Overriding these in our own pom ensures that the project uses the up-to-date versions we have defined instead of the older inherited ones.

2. Creating a Twitter Application

This use case – tweeting on a personal account, not on behalf of other users on their accounts – is a simple one. That simplicity allows us to dispense with most of the OAuth orchestration that would be necessary if the application needed to tweet for multiple users, on each of their Twitter accounts. So, for our use case, we will create the TwitterTemplate directly, as we can manually set up everything we need to do so. The first thing we need is a dev application, which can be created on the Twitter developer site after logging in. After creating the application, we will have a Consumer Key and Consumer Secret – these are obtained from the page of the application, on the Details tab, under OAuth settings. Also, in order to allow the application to tweet on the account, Read and Write access must be set to replace the default Read-only privileges.

3. Provisioning a TwitterTemplate

Next, the TwitterTemplate requires an Access Token and an Access Token Secret to be provisioned. These can also be generated from the application page – under the Details tab, via Create my access token. Both the Access Token and the Secret can then be retrieved from under the OAuth tool tab. New ones can always be regenerated on the Details tab, via the Recreate my access token action. At this point we have everything we need – the Consumer Key and Consumer Secret, as well as the Access Token and Access Token Secret – which means we can go ahead and create our TwitterTemplate for that application:

new TwitterTemplate(consumerKey, consumerSecret, accessToken, accessTokenSecret);

4. One Template per Account

Now that we have seen how to create a single TwitterTemplate for a single account, we can look back at our use case: we need to tweet on several accounts, which means we need several TwitterTemplate instances.
These can be easily created on request, with a simple mechanism:

@Component
public class TwitterTemplateCreator {

    @Autowired
    private Environment env;

    //
    public Twitter getTwitterTemplate(String accountName) {
        String consumerKey = env.getProperty(accountName + ".consumerKey");
        String consumerSecret = env.getProperty(accountName + ".consumerSecret");
        String accessToken = env.getProperty(accountName + ".accessToken");
        String accessTokenSecret = env.getProperty(accountName + ".accessTokenSecret");
        Preconditions.checkNotNull(consumerKey);
        Preconditions.checkNotNull(consumerSecret);
        Preconditions.checkNotNull(accessToken);
        Preconditions.checkNotNull(accessTokenSecret);

        //
        TwitterTemplate twitterTemplate = new TwitterTemplate(consumerKey, consumerSecret, accessToken, accessTokenSecret);
        return twitterTemplate;
    }
}

The four security artifacts are of course externalized in a properties file, by account; for example, for the SpringAtSO account:

SpringAtSO.consumerKey=nqYezCjxkHabaX6cdte12g
SpringAtSO.consumerSecret=7REmgFW4SnVWpD4EV5Zy9wB2ZEMM9WKxTaZwrgX3i4A
SpringAtSO.accessToken=1197830142-t44T7vwgmOnue8EoAxI1cDyDAEBAvple80s1SQ3
SpringAtSO.accessTokenSecret=ZIpghEJgFGNGQZzDFBT5TgsyeqDKY2zQmYsounPafE

This allows for a good mix of flexibility and safety: the security credentials are not part of the codebase (which is open source) but live independently on the filesystem, where they are picked up by Spring and made available in the Spring Environment via a simple configuration:

@Configuration
@PropertySource({ "file:///opt/stack/twitter.properties" })
public class TwitterConfig {
    //
}

Properties in Spring are a subject that has been discussed before, so we won't go into further detail here. Finally, a test will verify that an account has the necessary security information readily available in the Spring Environment; if the properties are not present, the getTwitterTemplate logic should fail the test with a NullPointerException:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { TwitterConfig.class })
public class TwitterTemplateCreatorIntegrationTest {

    @Autowired
    private TwitterTemplateCreator twitterTemplateCreator;

    //
    @Test
    public void givenValidAccountSpringAtSO_whenRetrievingTwitterClient_thenNoException() {
        twitterTemplateCreator.getTwitterTemplate(SimpleTwitterAccount.SpringAtSO.name());
    }
}

5. Tweeting

With the TwitterTemplate created, let's turn to the actual action of tweeting. For this, we'll use a very simple service, accepting a TwitterTemplate and using its underlying API to create a tweet:

@Service
public class TwitterService {
    private Logger logger = LoggerFactory.getLogger(getClass());

    //
    public void tweet(Twitter twitter, String tweetText) {
        try {
            twitter.timelineOperations().updateStatus(tweetText);
        } catch (RuntimeException ex) {
            logger.error("Unable to tweet: " + tweetText, ex);
        }
    }
}

6. Testing the TwitterTemplate

And finally, we can write an integration test to perform the entire process of provisioning a TwitterTemplate for an account and tweeting on that account:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { TwitterConfig.class })
public class TweetServiceLiveTest {

    @Autowired
    private TwitterService twitterService;

    @Autowired
    private TwitterTemplateCreator twitterCreator;

    //
    // tests
    @Test
    public void whenTweeting_thenNoExceptions() {
        Twitter twitterTemplate = twitterCreator.getTwitterTemplate("SpringAtSO");
        twitterService.tweet(twitterTemplate, "First Tweet");
    }
}
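As an aside, the TwitterTemplateCreator from section 4 builds a fresh TwitterTemplate on every call. If that ever becomes wasteful, one option is to cache a template per account. The following is a sketch of a possible addition to that class, not part of the original article; the field and method names are hypothetical, and it assumes java.util.concurrent.ConcurrentHashMap and ConcurrentMap imports:

// Hypothetical addition to TwitterTemplateCreator -- not in the original article
private final ConcurrentMap<String, Twitter> cachedTemplates =
        new ConcurrentHashMap<String, Twitter>();

public Twitter getCachedTwitterTemplate(String accountName) {
    Twitter template = cachedTemplates.get(accountName);
    if (template == null) {
        Twitter created = getTwitterTemplate(accountName); // the method shown above
        template = cachedTemplates.putIfAbsent(accountName, created);
        if (template == null) {
            template = created; // no concurrent insert happened; use our instance
        }
    }
    return template;
}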
7. Conclusion

At this point, the Twitter layer we have created is completely separate from the StackExchange API and can be used independently of that particular use case, to tweet anything. The next logical step in the process of tweeting questions from Stack Exchange accounts is to create a component interacting with both the Twitter and StackExchange APIs that we have presented so far – this will be the focus of the next article in this series.

Reference: Spring Social Twitter Setup from our JCG partner Eugen Paraschiv at the baeldung blog.