
Java EE6 CDI, Named Components and Qualifiers

One of the biggest promises Java EE 6 made was to ease the use of dependency injection. It delivered on that promise with CDI. CDI, which stands for Contexts and Dependency Injection for Java EE, offers a base set of features for applying dependency injection in your enterprise application. Before CDI, EJB 3 also introduced dependency injection, but it was fairly basic: you could inject an EJB (stateful or stateless) into another EJB or Servlet (if your container supported this). Of course, not every application needs EJBs, which is why CDI is gaining so much popularity. To start, I have made this example. There is a Payment interface and two implementations: a cash payment and a visa payment. I want to be able to choose which type of payment I inject, while still using the same interface:

    public interface Payment {
        void pay(BigDecimal amount);
    }

Here are the two implementations:

    public class CashPaymentImpl implements Payment {

        private static final Logger LOGGER = Logger.getLogger(CashPaymentImpl.class.toString());

        @Override
        public void pay(BigDecimal amount) {
            LOGGER.log(Level.INFO, "payed {0} cash", amount.toString());
        }
    }

    public class VisaPaymentImpl implements Payment {

        private static final Logger LOGGER = Logger.getLogger(VisaPaymentImpl.class.toString());

        @Override
        public void pay(BigDecimal amount) {
            LOGGER.log(Level.INFO, "payed {0} with visa", amount.toString());
        }
    }

To inject the interface we use the @Inject annotation. The annotation does basically what it says: it injects a component that is available in your application.

    @Inject
    private Payment payment;

Of course, you saw this coming from a mile away: this won’t work. The container has two implementations of our Payment interface, so it does not know which one to inject:

    Unsatisfied dependencies for type [Payment] with qualifiers [@Default] at injection point [[field] @Inject private be.styledideas.blog.qualifier.web.PaymentBackingAction.payment]

So we need some sort of qualifier to point out which implementation we want.
CDI offers the @Named annotation, allowing you to give a name to an implementation.

    @Named("cash")
    public class CashPaymentImpl implements Payment {

        private static final Logger LOGGER = Logger.getLogger(CashPaymentImpl.class.toString());

        @Override
        public void pay(BigDecimal amount) {
            LOGGER.log(Level.INFO, "payed {0} cash", amount.toString());
        }
    }

    @Named("visa")
    public class VisaPaymentImpl implements Payment {

        private static final Logger LOGGER = Logger.getLogger(VisaPaymentImpl.class.toString());

        @Override
        public void pay(BigDecimal amount) {
            LOGGER.log(Level.INFO, "payed {0} with visa", amount.toString());
        }
    }

When we now change our injection code, we can specify which implementation we need.

    @Inject
    private @Named("visa") Payment payment;

This works, but the flexibility is limited. When we want to rename the @Named parameter, we have to change it in every place where it is used, and there is no refactoring support. There is a better alternative: custom annotations built with the @Qualifier annotation. Let us change the code a little bit. First of all, we create new annotation types.

    @java.lang.annotation.Documented
    @java.lang.annotation.Retention(RetentionPolicy.RUNTIME)
    @javax.inject.Qualifier
    public @interface CashPayment {
    }

    @java.lang.annotation.Documented
    @java.lang.annotation.Retention(RetentionPolicy.RUNTIME)
    @javax.inject.Qualifier
    public @interface VisaPayment {
    }

The @Qualifier annotation that is added to the annotation makes it discoverable by the container. We can now simply add these annotations to our implementations.
    @CashPayment
    public class CashPaymentImpl implements Payment {

        private static final Logger LOGGER = Logger.getLogger(CashPaymentImpl.class.toString());

        @Override
        public void pay(BigDecimal amount) {
            LOGGER.log(Level.INFO, "payed {0} cash", amount.toString());
        }
    }

    @VisaPayment
    public class VisaPaymentImpl implements Payment {

        private static final Logger LOGGER = Logger.getLogger(VisaPaymentImpl.class.toString());

        @Override
        public void pay(BigDecimal amount) {
            LOGGER.log(Level.INFO, "payed {0} with visa", amount.toString());
        }
    }

The only thing we now need to do is change our injection code to:

    @Inject
    private @VisaPayment Payment payment;

When we now change something in our qualifier, we have nice compiler and refactoring support. This also adds extra flexibility for API or domain-specific language design.

Reference: Java EE6 CDI, Named Components and Qualifiers from our JCG partner Jelle Victoor at the Styled Ideas Blog.
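To see what the container's qualifier-based resolution boils down to, here is a minimal, self-contained sketch in plain Java, with no CDI container. The annotations and classes are simplified stand-ins for the ones above, and resolve() is a hypothetical helper that mimics type-safe resolution by picking the candidate carrying the requested qualifier:

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;

public class QualifierResolutionDemo {

    // Simplified qualifier annotations (the real ones also carry @Qualifier).
    @Retention(RetentionPolicy.RUNTIME)
    public @interface CashPayment {}

    @Retention(RetentionPolicy.RUNTIME)
    public @interface VisaPayment {}

    public interface Payment {
        String pay(BigDecimal amount);
    }

    @CashPayment
    public static class CashPaymentImpl implements Payment {
        public String pay(BigDecimal amount) { return "payed " + amount + " cash"; }
    }

    @VisaPayment
    public static class VisaPaymentImpl implements Payment {
        public String pay(BigDecimal amount) { return "payed " + amount + " with visa"; }
    }

    // Simplified stand-in for the container's resolution step: scan the
    // known candidates and instantiate the one with the requested qualifier.
    public static Payment resolve(Class<? extends Annotation> qualifier) throws Exception {
        List<Class<? extends Payment>> candidates =
                Arrays.asList(CashPaymentImpl.class, VisaPaymentImpl.class);
        for (Class<? extends Payment> candidate : candidates) {
            if (candidate.isAnnotationPresent(qualifier)) {
                return candidate.getDeclaredConstructor().newInstance();
            }
        }
        throw new IllegalStateException("Unsatisfied dependency for qualifier " + qualifier);
    }

    public static void main(String[] args) throws Exception {
        // prints: payed 10 with visa
        System.out.println(resolve(VisaPayment.class).pay(new BigDecimal("10")));
    }
}
```

A real container does far more (scopes, proxies, ambiguity checks), but this is the essence of why the qualifier annotation removes the string-matching fragility of @Named.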

Testing an Object’s Internal State with PowerMock

Most unit testing focuses on testing an object’s behaviour in order to prove that it works. This is achieved by writing a JUnit test that calls an object’s public methods and then checking that the return values from these calls match some previously defined set of expected values. This is a very common and successful technique; however, it shouldn’t be forgotten that objects also exhibit state; something that is, by virtue of the fact that it’s hidden, often overlooked. Grady Booch’s 1994 book Object Oriented Analysis and Design, which I first read in the summer of 1995, defines an object’s state in the following way:

    The state of an object encompasses all of the (usually static) properties of the object plus the current (usually dynamic) values of each of these properties.

He defines the difference between static state and dynamic state using a vending machine example. Static state is exhibited by the way that the machine is always ready to take your money, whilst dynamic state is how much of your money it’s got at any given instant. I suspect that at this point you’ll quite rightly argue that explicit behavioural tests do test an object’s state, by virtue of the fact that a given method call returned the correct result and that to get the correct result the object’s state had to also be correct… and I’ll agree. There are, however, those very few cases where classic behavioural testing isn’t applicable. This occurs when a public method call has no output and does nothing to an object except change its state. An example of this would be a method that returns void, or a constructor. For example, given a method with the following signature:

    public void init();

…how do you ensure it’s done its job? It turns out that there are several methods you can use to achieve this:

1. Add lots of getter methods to your class. This is not a particularly good idea, as you’re simply loosening encapsulation by the back door.
2. Relax encapsulation: make private instance variables package private. A very debatable thing to do. You could pragmatically argue that having well tested, correct and reliable code may be better than having a high degree of encapsulation, but I’m not too sure here. This may be a short-term fix, but it could lead to all kinds of problems in the future, and there should be a way of writing well tested, correct and reliable code that doesn’t involve breaking an object’s encapsulation.
3. Write some code that uses reflection to access an object’s internal state. This is the best idea to date. The downside is that it’s a fair amount of effort and requires a reasonable amount of programming competence.
4. Use PowerMock’s Whitebox testing class to do the hard work for you.

The following fully contrived scenario demonstrates the use of PowerMock’s Whitebox class. It takes a very simple AnchorTag class that builds an anchor (<a>) tag after testing that an input URL string is valid.

    public class AnchorTag {

        private static final Logger logger = LoggerFactory.getLogger(AnchorTag.class);

        /** Use the regex to figure out if the argument is a URL */
        private final Pattern pattern = Pattern.compile("^([a-zA-Z0-9]([a-zA-Z0-9\\-]{0,61}[a-zA-Z0-9])?\\.)+[a-zA-Z]{2,6}$");

        /**
         * A public method that uses the private method
         */
        public String getTag(String url, String description) {

            validate(url, description);
            String anchor = createNewTag(url, description);

            logger.info("This is the new tag: " + anchor);
            return "The tag is okay";
        }

        /**
         * A private method that's used internally, but is complex enough
         * to require testing in its own right
         */
        private void validate(String url, String description) {

            Matcher m = pattern.matcher(url);

            if (!m.matches()) {
                throw new IllegalArgumentException();
            }
        }

        private String createNewTag(String url, String description) {
            return "<a href=\"" + url + "\">" + description + "</a>";
        }
    }

The URL validation is done using a regular expression and a Java Pattern object.
Using the Whitebox class will ensure that the pattern object is configured correctly and that our AnchorTag is in the correct state. This is demonstrated by the JUnit test below:

    /**
     * Works for private instance vars. Does not work for static vars.
     */
    @Test
    public void accessPrivateInstanceVarTest() throws Exception {

        Pattern result = Whitebox.<Pattern>getInternalState(instance, "pattern");

        logger.info("Broke encapsulation to get hold of state: " + result.pattern());
        assertEquals("^([a-zA-Z0-9]([a-zA-Z0-9\\-]{0,61}[a-zA-Z0-9])?\\.)+[a-zA-Z]{2,6}$", result.pattern());
    }

The crux of this test is the line:

    Pattern result = Whitebox.<Pattern>getInternalState(instance, "pattern");

…which uses reflection to return the private Pattern instance variable. Once we have access to this object, we simply ask it if it has been initialised correctly by calling:

    assertEquals("^([a-zA-Z0-9]([a-zA-Z0-9\\-]{0,61}[a-zA-Z0-9])?\\.)+[a-zA-Z]{2,6}$", result.pattern());

In conclusion, I would suggest that using PowerMock to explicitly test an object’s internal state should be reserved for when you can’t use a straightforward, classic JUnit behavioural test. Having said that, it is another tool in your toolbox that’ll help you to write better code.

Reference: Testing an Object’s Internal State with PowerMock from our JCG partner Roger at Captain Debug’s Blog.
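If you don't want to pull in PowerMock just for this, the same state check can be done with a few lines of plain JDK reflection; this is roughly what Whitebox.getInternalState does under the hood. A minimal sketch, assuming only the pattern field from the AnchorTag above (getInternalState here is my own helper, not a library method):

```java
import java.lang.reflect.Field;
import java.util.regex.Pattern;

public class InternalStateDemo {

    // Cut-down version of the AnchorTag above: only the private field we want to inspect.
    public static class AnchorTag {
        private final Pattern pattern =
                Pattern.compile("^([a-zA-Z0-9]([a-zA-Z0-9\\-]{0,61}[a-zA-Z0-9])?\\.)+[a-zA-Z]{2,6}$");
    }

    // Roughly what Whitebox.getInternalState does: locate the private field
    // by name, make it accessible, and read its current value.
    @SuppressWarnings("unchecked")
    public static <T> T getInternalState(Object instance, String fieldName) throws Exception {
        Field field = instance.getClass().getDeclaredField(fieldName);
        field.setAccessible(true);
        return (T) field.get(instance);
    }

    public static void main(String[] args) throws Exception {
        Pattern pattern = getInternalState(new AnchorTag(), "pattern");
        // prints: true
        System.out.println(pattern.matcher("www.example.com").matches());
    }
}
```

The trade-off is the same as with Whitebox: the test is now coupled to a field name, so a rename breaks it at runtime rather than at compile time.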

Top 10 Java Books you don’t want to miss

We learn by reading books and experimenting with them, so it is imperative that you choose the best available options. In this post I would like to share my experience with some of these books and how they can help you evolve as a Java developer. Let's start from the floor: the first 3 books are a good starting point for any Java student. Java Programming Language helps you get familiar with Java, while Head First Java will help stick the Java concepts into your brain so that you never forget them. I have chosen Thinking In Java as the 3rd book in this category, but Java: The Complete Reference by Herbert Schildt and Java in a Nutshell by David Flanagan are good substitutes. These books are more of a reference than a must-read.

1. Java Programming Language by Ken Arnold, James Gosling, David Holmes

Direct from the creators of Java, The Java Programming Language is an indispensable resource for novice and advanced programmers alike. Developers around the world have used previous editions to quickly gain a deep understanding of the Java programming language, its design goals, and how to use it most effectively in real-world development. The authors systematically cover most classes in Java’s main packages, java.lang.*, java.util, and java.io, presenting in-depth explanations of why these classes work as they do, with informative examples. Several new chapters and major sections have been added, and every chapter has been updated to reflect today’s best practices for building robust, efficient, and maintainable Java software. The above is extracted from the book's index page.

2. Head First Java by Kathy Sierra, Bert Bates

Its unique approach not only shows you what you need to know about Java syntax, it enables and encourages you to think like a Java programmer. Mastering object oriented programming requires a certain way of thinking, not just a certain way of writing code.
The latest research in cognitive science, neurobiology, and educational psychology shows that learning at the deeper levels takes a lot more than text on a page. Actively combining words and pictures not only helps in understanding the subject, but in remembering it. According to some studies, an engaging, entertaining, image-rich, conversational approach actually teaches the subject better. Head First Java puts these theories into practice with a vengeance. The above lines are copied from Google Books.

3. Thinking In Java by Bruce Eckel

Eckel introduces all the basics of objects as Java uses them, then walks carefully through the fundamental concepts underlying all Java programming — including program flow, initialization and cleanup, implementation hiding, reusing classes, and polymorphism. Using extensive, to-the-point examples, he introduces exception handling, Java I/O, run-time type identification, and passing and returning objects. Eckel also provides an overview of the key technology of the Java 2 Enterprise Edition (J2EE) platform. The above lines are copied from Google Books.

I am not a big fan of the SCJP exam, but A Programmer’s Guide to Java SCJP Certification is much more than a certification guide: it gives you an insight into Java and its tips and tricks. SCJP Sun Certified Programmer for Java 5 Study Guide by Kathy Sierra and Bert Bates is the go-to book if you are mad about SCJP. It is better to read these books than to spend time on question dumps; they will help your career much more than merely clearing the exam.

4. A Programmer’s Guide to Java SCJP Certification: A Comprehensive Primer by Khalid Azim Mughal, Rolf Rasmussen

This book will help you prepare for and pass the Sun Certified Programmer for the Java Platform SE 6 (CX-310-065) exam. It is written for any experienced programmer (with or without previous knowledge of Java) interested in mastering the Java programming language.
It contains in-depth explanations of the language features, whose usage is illustrated by way of code scenarios, as required by the exam. It also offers:

- Numerous exam-relevant review questions to test your understanding of each major topic, with annotated answers
- Programming exercises and solutions at the end of each chapter
- Copious code examples illustrating concepts, where the code has been compiled and thoroughly tested on multiple platforms
- Program output demonstrating expected results from running the examples
- Extensive use of UML (Unified Modelling Language) for illustration purposes

The above lines are copied from Google Books.

OK, so you have got to know Java and have been working with it for a couple of years; it's time to take the next step. Everything in this world has good and bad sides, and the Java language, if not used the way it is supposed to be, can make your life miserable. When you write code, it's written for the future. Writing good Java code is an art that needs a lot more skill than knowledge of basic Java. Here I would like to introduce the next set of 4 books that can make you a master of the trade. The Pragmatic Programmer is not really a Java book but a self-help book for any programmer. It is a great book covering various aspects of software development and is capable of transforming you into a Pragmatic Programmer.

5. The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt, David Thomas

Written as a series of self-contained sections and filled with entertaining anecdotes, thoughtful examples, and interesting analogies, The Pragmatic Programmer illustrates the best practices and major pitfalls of many different aspects of software development. Whether you’re a new coder, an experienced programmer, or a manager responsible for software projects, use these lessons daily, and you’ll quickly see improvements in personal productivity, accuracy, and job satisfaction.
You’ll learn skills and develop habits and attitudes that form the foundation for long-term success in your career. You’ll become a Pragmatic Programmer. The above lines are copied from Google Books.

So, we wrote code; it is time to add some style. The Elements of Java Style is one of the earliest pieces of documentation on the style aspects of Java.

6. The Elements of Java Style by Scott Ambler, Alan Vermeulen

Many books explain the syntax and basic use of Java; however, this essential guide explains not only what you can do with the syntax, but what you ought to do. While illustrating these rules with parallel examples of correct and incorrect usage, the authors offer a collection of standards, conventions, and guidelines for writing solid Java code that will be easy to understand, maintain, and enhance. Java developers and programmers who read this book will write better Java code, and become more productive as well. The above lines are copied from Google Books.

Now we know how to write code in style. But is it best in class? Does it use best practices? Effective Java, one of the best books on best practices, is a favourite of many Java developers.

7. Effective Java by Joshua Bloch

Joshua brings together seventy-eight indispensable programmer’s rules of thumb: working, best-practice solutions for the programming challenges you encounter every day. Bloch explores new design patterns and language idioms, showing you how to make the most of features ranging from generics to enums, annotations to autoboxing. Each chapter in the book consists of several “items” presented in the form of a short, standalone essay that provides specific advice, insight into Java platform subtleties, and outstanding code examples. The comprehensive descriptions and explanations for each item illuminate what to do, what not to do, and why.
The above lines are copied from Google Books.

Now that you know the good, it is time for the bad stuff. Bitter Java is one of the first books to bring up anti-patterns in Java. There are various articles and books on anti-patterns and code smells, and it is an area with plenty of room to learn. There are many other books on this topic; I am adding this one as a starting point.

8. Bitter Java by Bruce Tate

Intended for intermediate Java programmers, analysts, and architects, this guide is a comprehensive analysis of common server-side Java programming traps (called anti-patterns) and their causes and resolutions. Based on a highly successful software conference presentation, this book is grounded on the premise that software programmers enjoy learning not from successful techniques and design patterns, but from bad programs, designs, and war stories — bitter examples. These educational techniques of graphically illustrating good programming practices through negative designs and anti-patterns also have one added benefit: they are fun. The above lines are copied from Google Books.

Many say you need to know design patterns if you want to grow as a developer, so I thought of mentioning the best design pattern book that I have read. It is not a reference book, nor does it contain the patterns catalogue, but it explains the object-oriented design principles that are as important as the patterns themselves. Use the book Design Patterns: Elements of Reusable Object-Oriented Software if you are looking for a reference.

9. Head First Design Patterns by Eric Freeman, Elisabeth Freeman, Kathy Sierra, Bert Bates

You know you don’t want to reinvent the wheel (or worse, a flat tire), so you look to design patterns–the lessons learned by those who’ve faced the same problems. With design patterns, you get to take advantage of the best practices and experience of others.
Using the latest research in neurobiology, cognitive science, and learning theory, Head First Design Patterns will load patterns into your brain in a way that sticks. In a way that lets you put them to work immediately. In a way that makes you better at solving software design problems, and better at speaking the language of patterns with others on your team. The above lines are copied from Google Books.

If you are a master at coding and designing applications in Java, it's time to crack the JVM. I have read that The Java Language Specification is the best book to do that. I have not got the patience or skill to read it, but it is an interesting pick if you want to cross the line.

10. The Java Language Specification

The book provides complete, accurate, and detailed coverage of the Java programming language. It provides full coverage of all the new features added since the previous edition, including generics, annotations, asserts, autoboxing, enums, for-each loops, variable arity methods, and static import clauses. The above is extracted from the book's index page.

In these web years, online resources may be more reachable than books, but I feel these books will help tune you into a better Java programmer.

Reference: Top 10 Java Books you don’t want to miss from our JCG partner Manu PK at The Object Oriented Life blog.

Parametrizing custom validator in JSF 2

Writing a custom validator in JSF 2 is not a complicated task: you implement the Validator interface, add the @FacesValidator annotation, and insert the validator declaration in faces-config.xml. That's all; a piece of cake. But let's consider the following scenario: you need a custom date validator, let's say one checking that a date from rich:calendar is not in the past. So we place a calendar component with the validator inside:

    <rich:calendar value="#{fieldValue}" id="dateField" datePattern="yyyy/MM/dd">
        <f:validator validatorId="dateNotInThePast"/>
    </rich:calendar>

And our validator could look like this:

    @FacesValidator("dateNotInThePast")
    public class DateNotInThePastValidator implements Validator {

        @Override
        public void validate(FacesContext facesContext, UIComponent uiComponent, Object value) throws ValidatorException {
            if (ObjectUtil.isNotEmpty(value)) {
                checkDate((Date) value, uiComponent, facesContext.getViewRoot().getLocale());
            }
        }

        private void checkDate(Date date, UIComponent uiComponent, Locale locale) {
            if (isDateInRange(date) == false) {
                ResourceBundle rb = ResourceBundle.getBundle("messages", locale);
                String messageText = rb.getString("date.not.in.the.past");
                throw new ValidatorException(new FacesMessage(FacesMessage.SEVERITY_ERROR, messageText, messageText));
            }
        }

        private boolean isDateInRange(Date date) {
            Date today = new DateTime().withTime(0, 0, 0, 0).toDate();
            return date.after(today) || date.equals(today);
        }
    }

And if we provide the key value in the properties file, we will see the expected error message. So it looks like we have a working and production-ready custom validator.

The problem: as our form becomes more and more complex, we might encounter the issue shown in the screenshot below: how can the user determine which date is valid and which is not? Our validator uses the same property key to display both error messages.

The solution: we need to somehow provide the label of the validated field to our custom validator. And, surprisingly for JSF, it can be achieved pretty easily.
The only catch is that you have to know how to do it. In Java Server Faces we can parametrize components with attributes (the f:attribute tag). So we add an attribute to rich:calendar and then read this passed value inside the validator assigned to the calendar field. Our calendar component now looks like this:

    <rich:calendar value="#{fieldValue}" id="dateField" datePattern="yyyy/MM/dd">
        <f:validator validatorId="dateNotInThePast"/>
        <f:attribute name="fieldLabel" value="Date field 2" />
    </rich:calendar>

And in our validator Java class we can get this value using uiComponent.getAttributes().get("fieldLabel"):

    private void checkDate(Date date, UIComponent uiComponent, Locale locale) {
        if (isDateInRange(date) == false) {
            ResourceBundle rb = ResourceBundle.getBundle("messages", locale);
            String messageText = getFieldLabel(uiComponent) + " " + rb.getString(getErrorKey());

            throw new ValidatorException(new FacesMessage(FacesMessage.SEVERITY_ERROR, messageText, messageText));
        }
    }

    protected String getFieldLabel(UIComponent uiComponent) {
        String fieldLabel = (String) uiComponent.getAttributes().get("fieldLabel");

        if (fieldLabel == null) {
            fieldLabel = "Date";
        }

        return fieldLabel;
    }

Our property value for the error should now read "can not be in the past", since "Date" or the field label will be prepended to the error message. The working example should then show something similar to the screenshot above.

Reference: Parametrizing custom validator in JSF 2 from our JCG partner Tomasz Dziurko at the Code Hard Go Pro blog.
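The label-fallback logic in getFieldLabel is a plain map lookup, so it can be sanity-checked without a JSF container. A small sketch, where the Map stands in for UIComponent.getAttributes() and the default label "Date" matches the validator above:

```java
import java.util.HashMap;
import java.util.Map;

public class FieldLabelDemo {

    // Same fallback rule as the validator's getFieldLabel():
    // use the component's "fieldLabel" attribute when present,
    // otherwise fall back to a generic "Date" label.
    public static String getFieldLabel(Map<String, Object> attributes) {
        String fieldLabel = (String) attributes.get("fieldLabel");
        return fieldLabel != null ? fieldLabel : "Date";
    }

    public static void main(String[] args) {
        Map<String, Object> attrs = new HashMap<>();
        attrs.put("fieldLabel", "Date field 2");

        // prints: Date field 2 can not be in the past
        System.out.println(getFieldLabel(attrs) + " can not be in the past");

        // prints: Date can not be in the past
        System.out.println(getFieldLabel(new HashMap<>()) + " can not be in the past");
    }
}
```

Extracting the lookup like this also makes it easy to unit-test the message composition for every field on the form before wiring it into the real validator.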

Vaadin add-ons and Maven

Introduction

One (of the many) things I like about Vaadin is its community of 'add-ons' to the Vaadin framework – what they call the Vaadin Directory. An 'add-on' is a community-contributed addition to the framework, and can be anything from, for example, a new client-side widget to a lazy-loading container for a data table. I would definitely like to see something similar for Activiti! Vaadin widgets are basically precompiled GWT widgets. GWT widgets in themselves are Java classes, but the GWT compiler compiles them to Javascript that works across all browsers. So when you want to use a certain add-on (one that has new client-side visuals) in your Vaadin webapp, you will have to compile it yourself, since you must include the new Javascript in your webapp. If you are using the Vaadin Eclipse plugin, all is happy and fine: just add the add-on jar to your project, and the plugin autodetects and compiles the new widgets. However, when your webapp is built using Maven, it's not that simple. But throwing out Maven and manually copying all your dependency jars is not necessary at all; it's 2011 after all. The default way of doing the GWT compilation in Maven is, in my opinion, not efficient. So let me guide you through the setup that works best for me and how I tweaked the Maven pom.xml. For the impatient ones: check the source on GitHub: https://github.com/jbarrez/vaadin-mvn-addon

Creating a new Vaadin webapp with Maven

This step is well documented; just check the Vaadin wiki.
Short version: use the following archetype:

    mvn archetype:generate -DarchetypeGroupId=com.vaadin -DarchetypeArtifactId=vaadin-archetype-clean -DarchetypeVersion=6.5.6 -DgroupId=com.jorambarrez -DartifactId=vaadin-mvn-addon -Dversion=1.0 -Dpackaging=war

Add the add-on

In this example webapp, I'm going to use two cool Vaadin add-ons:

- PaperStack: a container that allows you to display components as pages of a book
- Refresher: a client-side component that polls the server for UI changes

Both add-ons have new client-side widgets, so a run through the GWT compiler is definitely needed.

Tweak pom.xml

Open up the pom.xml. The archetype has already generated all you need to work with custom add-ons. Look for commented sections and just uncomment them. That's all there is to it.

Create the webapp

The following Vaadin webapp shows a simple use of these two components. We'll just display 'ACTIVITI', with each character on a new page of the PaperStack component. We also have a button that will auto-flip through the pages using the Refresher component and a server-side thread:

    public class MyVaadinApplication extends Application {

        private static final String DISPLAYED_WORD = "ACTIVITI";

        private Window window;
        private Refresher refresher;
        private Button goButton;
        private PaperStack paperStack;

        @Override
        public void init() {
            window = new Window("My Vaadin Application");
            setMainWindow(window);

            initGoButton();
            initPaperStack();
        }

        private void initGoButton() {
            goButton = new Button("Flip to the end");
            window.addComponent(goButton);

            goButton.addListener(new ClickListener() {
                public void buttonClick(ClickEvent event) {
                    goButton.setEnabled(false);
                    startRefresher();
                    startPageFlipThread();
                }
            });
        }

        private void startRefresher() {
            refresher = new Refresher();
            window.addComponent(refresher);
            refresher.setRefreshInterval(100L);
        }

        private void startPageFlipThread() {
            Thread thread = new Thread(new Runnable() {
                public void run() {
                    goButton.setEnabled(false);
                    int nrOfUpdates = DISPLAYED_WORD.length() - 1;
                    while (nrOfUpdates >= 0) {
                        paperStack.navigate(true);
                        nrOfUpdates--;
                        try {
                            Thread.sleep(2000L);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }

                    // Remove refresher when done (for performance)
                    goButton.setEnabled(true);
                    window.removeComponent(refresher);
                    refresher = null;
                }
            });
            thread.start();
        }

        private void initPaperStack() {
            paperStack = new PaperStack();
            window.addComponent(paperStack);

            for (int i = 0; i < DISPLAYED_WORD.length(); i++) {
                VerticalLayout verticalLayout = new VerticalLayout();
                verticalLayout.setSizeFull();
                paperStack.addComponent(verticalLayout);

                // Quick-hack CSS since I'm too lazy to define a styles.css
                Label label = new Label("<div style=\"text-align:center;color:blue;font-weight:bold;font-size:100px;text-shadow: 5px 5px 0px #eee, 7px 7px 0px #707070;\">" + DISPLAYED_WORD.charAt(i) + "</div>", Label.CONTENT_XHTML);
                label.setWidth(100, Label.UNITS_PERCENTAGE);
                verticalLayout.addComponent(label);
                verticalLayout.setComponentAlignment(label, Alignment.MIDDLE_CENTER);
            }
        }
    }

Tweak web.xml

To make Vaadin aware of the custom add-ons, add the following lines to the Vaadin Application Servlet:

    <init-param>
        <param-name>widgetset</param-name>
        <param-value>com.jorambarrez.CustomWidgetset</param-value>
    </init-param>

Also add a file 'CustomWidgetset.gwt.xml' in the package com.jorambarrez (matching whatever you have put in the web.xml). Just copy the following lines, and don't worry about putting the add-on GWT descriptors there (which would be logical); the Maven plugins will find them automatically in the add-on jars.

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE module PUBLIC "-//Google Inc.//DTD Google Web Toolkit 1.7.0//EN" "http://google-web-toolkit.googlecode.com/svn/tags/1.7.0/distro-source/core/src/gwt-module.dtd">
    <module>
        <inherits name="com.vaadin.terminal.gwt.DefaultWidgetSet" />
    </module>

Run the webapp

Go to your project and execute the following command:

    mvn clean package jetty:run

This starts up a Jetty webserver and deploys our webapp.
You should now be able to play around with the webapp.

The problem

When running the previous command, you should see the GWT compiler kicking in and compiling the custom widgets. The problem is that the GWT compiler does take its time to do its magic (1 minute 30 seconds on my machine, while simply starting Jetty takes 5 seconds). I'm not going to sit and watch the GWT compiler sprinkling pixie dust over my add-ons every frick'n time I want to run my app. Sure, JRebel could help out a lot here, but it should most definitely not be necessary to have my widgets compiled every time. After all, I'm not changing these add-ons at all, right?

Tweaking pom.xml (The Sequel)

So we just learned that the default pom.xml generated by the Vaadin archetype isn't friendly when it comes to add-ons. If you take a look at the configuration of the GWT compiler plugin, you'll notice that the compiled widgets are added to the target folder, and not to the sources of your project:

    <webappDirectory>${project.build.directory}/${project.build.finalName}/VAADIN/widgetsets</webappDirectory>

If we change that to our source folder:

    <webappDirectory>src/main/webapp/VAADIN/widgetsets</webappDirectory>

the result of the GWT compilation is put in the source of my webapp. This also means I can check the compiled widgets in together with the rest of my webapp. The only thing we now need to do is make sure we don't recompile the widgets on every run. I've chosen to simply put the compilation in a profile, as follows:

    <profiles>
        <profile>
            <id>compile-widgetset</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>gwt-maven-plugin</artifactId>
                        ....

Whenever I now add a new add-on to the project, I run the following command:

    mvn -Pcompile-widgetset clean compile

and it will compile all add-ons and put the result in my source folders. Running the Jetty webserver as shown above will now just copy these sources to the war file, and boot time is reduced to a minimum again (5 seconds here).
Source The whole webapp as described in the above steps is fully available on GitHub: https://github.com/jbarrez/vaadin-mvn-addon Conclusion Using add-ons with Maven is not that hard, and all of it is well-documented (as is everything in Vaadin). However, the Maven archetype generates a Maven configuration that isn’t that efficient, since it recompiles the add-on widgets on every run. The above steps show how to tweak the config to make it more suitable for real rapid development! Any comments or improvements are welcome of course! Reference: How to: Vaadin add-ons and Maven from our JCG partner Joram Barrez at the Small steps with big feet blog Related Articles :Integrating Maven with Ivy OSGi Using Maven with Equinox Java Modularity Approaches – Modules, modules, modules GWT EJB3 Maven JBoss 5.1 integration tutorial Building your own GWT Spring Maven Archetype Java Tutorials and Android Tutorials list...

You can’t be Agile in Maintenance? (Part 2)

This article continues from You can’t be Agile in Maintenance? (Part 1) Coding Guidelines – follow the rules Getting the team to follow coding guidelines is important in maintenance to help ensure the consistency and integrity of the code base over time – and to help ensure software security (PPT). Of course teams may have to compromise on coding standards and style conventions, depending on what they have inherited in the code base; and teams that maintain multiple systems will have to follow different guidelines for each system. Metaphor In XP, teams are supposed to share a Metaphor: a simple high-level expression of the system architecture (the system is a production line, or a bill of materials) and common names and patterns that can be used to describe the system. It’s a fuzzy concept at best (PDF), a weak substitute for more detailed architecture or design, and it’s not of much practical value in maintenance. Maintenance teams have to work with the architecture and patterns that are already in place in the system. What is important is making sure that the team has a common understanding of these patterns and the basic architecture so that the integrity isn’t lost – if it hasn’t been lost already. Getting the team together and reviewing the architecture, or reverse-engineering it, making sure that they all agree on it and documenting it in a simple way is important especially when taking over maintenance of a new system and when you are planning major changes. Simple Design Agile development teams start with simple designs and try to keep them simple. Maintenance teams have to work with whatever design and architecture that they inherit, which can be overwhelmingly complex, especially in bigger and older systems. But the driving principle should still be to design changes and new features as simple as the existing system lets you – and to simplify the system’s design further whenever you can. 
Especially when making small changes, simple, just-enough design is good – it means less documentation and less time and less cost. But maintenance teams need to be more risk averse than development teams – even small mistakes can break compatibility or cause a run-time failure or open a security hole. This means that maintainers can’t be as iterative and free to take chances, and they need to spend more time upfront doing analysis, understanding the existing design and working through dependencies, as well as reviewing and testing their changes for regressions afterwards. Refactoring Refactoring takes on a lot of importance in maintenance. Every time a developer makes a change or fix, they should consider how much refactoring work they should do and can do to make the code and design clearer and simpler, and to pay off technical debt. What and how much to refactor depends on what kind of work they are doing (making a well-thought-out isolated change, or doing shotgun surgery, or pushing out an emergency hot fix) and the time and risks involved, how well they understand the code, how good their tools are (development IDEs for Java and .NET at least have good built-in tools that make many refactorings simple and safe) and what kind of safety net they have in place to catch mistakes – automated tests, code reviews, static analysis. Some maintenance teams don’t refactor because they are too afraid of making mistakes. It’s a vicious circle – over time the code will get harder and harder to understand and change, and they will have more reasons to be more afraid. Others claim that a maintenance team is not working correctly if they don’t spend at least 50% of their time refactoring (PDF). The real answer is somewhere in between – enough refactoring to make changes and fixes safe. There are cases where extensive refactoring, restructuring or rewriting code is the right thing to do. 
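To make the refactoring point concrete, here is a tiny, hypothetical sketch of the kind of low-risk cleanup a maintainer might fold into a fix – a guard clause replacing nested conditionals – together with a "pinning" check that the old and new versions behave identically. The class name and fee rules are invented for illustration and do not come from the article:

```java
// Hypothetical maintenance code: names and pricing rules are illustrative only.
public class ShippingCalculator {

    // Before: nested conditionals are easy to break when patching under pressure.
    static double feeBefore(double weight, boolean express) {
        double fee;
        if (weight <= 0) {
            fee = 0;
        } else {
            if (express) {
                fee = weight * 2.0 + 5.0;
            } else {
                fee = weight * 2.0;
            }
        }
        return fee;
    }

    // After: a guard clause and an extracted base rate; behavior is unchanged.
    static double feeAfter(double weight, boolean express) {
        if (weight <= 0) {
            return 0;
        }
        double base = weight * 2.0;
        return express ? base + 5.0 : base;
    }

    public static void main(String[] args) {
        // Pinning check: old and new versions must agree before the old one is deleted.
        double[] weights = {-1, 0, 1.5, 10};
        for (double w : weights) {
            for (boolean express : new boolean[] {false, true}) {
                if (feeBefore(w, express) != feeAfter(w, express)) {
                    throw new AssertionError("behavior changed for weight=" + w);
                }
            }
        }
        System.out.println("refactoring preserved behavior");
    }
}
```

Running the old and new implementations side by side like this, before deleting the old one, is one cheap way to build the safety net discussed above when no automated test suite exists yet.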
Some code is too dangerous to change or too full of bugs to leave the way it is – studies show that in most systems, especially big systems, 80% of the bugs can cluster in 20% of the code. Restructuring or rewriting this code can pay off quickly, reducing problems in production, and significantly reducing the time needed to make changes and test them as you go forward. Continuous Testing Testing is even more important and necessary in maintenance than it is in development. And it’s a major part of maintenance costs. Most maintenance teams rely on developers to test their own changes and fixes by hand to make sure that the change worked and that they didn’t break anything as a side effect. Of course this makes testing expensive and inefficient and it limits how much work the team can do. In order to move fast, to make incremental changes and refactoring safe, the team needs a better safety net, by automating unit and functional tests and acceptance tests. It can take a long time to put in test scaffolding and tools and write a good set of automated tests. But even a simple test framework and a small set of core fat tests can pay back quickly in maintenance, because a lot of changes (and bugs) tend to be concentrated in the same parts of the code – the same features, framework code and APIs get changed over and over again, and will need to be tested over and over again. You can start small, get these tests running quickly and reliably and get the team to rely on them, fill in the gaps with manual tests and reviews, and then fill out the tests over time. Once you have a basic test framework in place, developers can take advantage of TFD/TDD especially for bug fixes – the fix has to be tested anyway, so why not write the test first and make sure that you fixed what you were supposed to? Continuous Integration To get Continuous Testing to work, you need a Continuous Integration environment. 
Understanding, automating and streamlining the build and getting the CI server up and running and wiring in tests and static analysis checks and reporting can take a lot of work in an enterprise system, especially if you have to deal with multiple languages and platforms and dependencies between systems. But doing this work is also the foundation for simplifying release and deployment – frequent short releases means that release and deployment has to be made as simple as possible. Onsite Customer / Product Owner Working closely with the customer to make sure that the team is delivering what the customer needs when the customer needs it is as important in maintenance as it is in developing a new system. Getting a talented and committed Customer engaged is hard enough on a high-profile development project – but it’s even harder in maintenance. You may end up with too many customers with conflicting agendas competing for the team’s attention, or nobody who has the time or ability to answer questions and make decisions. Maintenance teams often have to make compromises and help fill in this role on their own. But it doesn’t all fit…. Kilner’s main point of concern isn’t really with Agile methods in maintenance. It’s with incremental design and development in general – that some work doesn’t fit nicely into short time boxes. Short iterations might work ok for bug fixes and small enhancements (they do), but sometimes you need to make bigger changes that have lots of dependencies. He argues that while Agile teams building new systems can stub out incomplete work and keep going in steps, maintenance teams have to get everything working all at once – it’s all or nothing. It’s not easy to see how big changes can be broken down into small steps that can be fit into short time boxes. 
I agree that this is harder in maintenance because you have to be more careful in understanding and untangling dependencies before you make changes, and you have to be more careful not to break things. The code and design will sometimes fight the kinds of changes that you need to make, because you need to do something that was never anticipated in the original design, or whatever design there was has been lost over time and any kind of change is hard to make. It’s not easy – but teams solve these problems all the time. You can use tools to figure out how much of a dependency mess you have in the code and what kind of changes you need to make to get out of this mess. If you are going to spend “weeks, months, or even years” to make changes to a system, then it makes sense to take time upfront to understand and break down build dependencies and isolate run-time dependencies, and put in test scaffolding and tests to protect the team from making mistakes as they go along. All of this can be done in time boxed steps. Just because you are following time boxes and simple, incremental design doesn’t mean that you start making changes without thinking them through. Read Working Effectively with Legacy Code – Michael Feathers walks through how to deal with these problems in detail, in both object oriented and procedural languages. What to do if it takes forever to make a change. How to break dependencies. How to find interception points and pinch points. How to find structure in the design and the code. What tests to write and how to get automated tests to work. Changing data in a production system, especially data shared with other systems, isn’t easy either. You need to plan out API changes and data structure changes as carefully as possible, but you can still make data and database changes in small, structured steps. 
To make code changes in steps you can use Branching by Abstraction where it makes sense (like making back-end changes) and you can protect customers from changes through Feature Flags and Dark Launching like Facebook and Twitter and Flickr do to continuously roll out changes – although you need to be careful, because if taken too far these practices can make code more fragile and harder to work with. Agile development teams follow incremental design and development to help them discover an optimal solution through trial-and-error. Maintenance teams work this way for a different reason – to manage technical risks by breaking big changes down and making small bets instead of big ones. Working this way means that you have to put in scaffolding (and remember to take it out afterwards) and plan out intermediate steps and review and test everything as you make each change. Sometimes it might feel like you are running in place, that it is taking longer and costing more. But getting there in small steps is much safer, and gives you a lot more control. Teams working on large legacy code bases and old technology platforms will have a harder time taking on these ideas and succeeding with them. But that doesn’t mean that they won’t work. Yes, you can be Agile in maintenance. Reference: You can’t be Agile in Maintenance? from our JCG partner Jim Bird at the “Building Real Software” blog. Related Articles :Save money from Agile Development Standups – take them or leave them Agile software development recommendations for users and new adopters Breaking Down an Agile process Backlog Not doing Code Reviews? What’s your excuse? Java Tutorials and Android Tutorials list...

You can’t be Agile in Maintenance? (Part 1)

I’ve been going over a couple of posts by Steve Kilner that question whether Agile methods can be used effectively in software maintenance. It’s a surprising question really. There are a lot of maintenance teams who have had success following Agile methods like Scrum and Extreme Programming (XP) (PDF) for some time now. We’ve been doing it for almost 5 years, enhancing and maintaining and supporting enterprise systems, and I know that it works. Agile development naturally leads into maintenance – the goal of incremental Agile development is to get working software out to customers as soon as possible, and get customers using it. At some point, when customers are relying on the software to get real business done and need support and help to keep the system running, teams cross from development over to maintenance. But there’s no reason for Agile development teams to fundamentally change the way that they work when this happens. It is harder to introduce Agile practices into a legacy maintenance team – there are a lot of technical requirements and some cultural changes that need to be made. But most maintenance teams have little to lose and lots to gain from borrowing from what Agile development teams are doing. Agile methods are designed to help small teams deal with a lot of change and uncertainty, and to deliver software quickly – all things that are at least as important in maintenance as they are in development. Technical practices in Extreme Programming especially help ensure that the code is always working – which is even more important in maintenance than it is in development, because the code has to work the first time in production. Agile methods have to be adapted to maintenance, but most teams have found it necessary to adapt these methods to fit their situations anyways. Let’s look at what works and what has to be changed to make Agile methods like Scrum and XP work in maintenance. 
What works well and what doesn’t Planning Game Managing maintenance isn’t the same as managing a development project – even an Agile development project. Although Agile development teams expect to deal with ambiguity and constant change, maintenance teams need to be even more flexible and responsive, to manage conflicts and unpredictable resourcing problems. Work has to be continuously reviewed and prioritized as it comes in – the customer can’t wait for 2 weeks for you to look at a production bug. The team needs a fast path for urgent changes and especially for hot fixes. You have to be prepared for support demands and interruptions. Structure the team so that some people can take care of second-level support, firefighting and emergency bug fixing and the rest of the team can keep moving forward and get something done. Build slack into schedules to allow for last-minute changes and support escalation. You will also have to be more careful in planning out maintenance work, to take into account technical and operational dependencies and constraints and risks. You’re working in the real world now, not the virtual reality of a project. Standups Standups play an important role in Agile projects to help teams come up to speed and bond. But most maintenance teams work fine without standups – since a lot of maintenance work can be done by one person working on their own, team members don’t need to listen to each other each morning talking about what they did yesterday and what they’re going to do – unless the team is working together on major changes. If someone has a question or runs into a problem, they can ask for help without waiting until the next day. Small releases Most changes and fixes that maintenance teams need to make are small, and there is almost always pressure from the business to get the code out as soon as it is ready, so an Agile approach with small and frequent releases makes a lot of sense. 
If the time boxes are short enough, the customer is less likely to interrupt and re-prioritize work in progress – most businesses can wait a few days or a couple of weeks to get something changed. Time boxing gives teams a way to control and structure their work, an opportunity to batch up related work to reduce development and testing costs, and natural opportunities to add in security controls and reviews and other gates. It also makes maintenance work more like a project, giving the team a chance to set goals and to see something get done. But time boxing comes with overhead – the planning and setup at the start, then deployment and reviews at the end – all of which adds up over time. Maintenance teams need to be ruthless with ceremonies and meetings, pare them down, keep only what’s necessary and what works. It’s even more important in maintenance than in development to remember that the goal is to deliver working code at the end of each time box. If some code is not working, or you’re not sure if it is working, then extend the deadline, back some of the changes out, or pull the plug on this release and start over. Don’t risk a production failure in order to hit an arbitrary deadline. If the team is having problems fitting work into time boxes, then stop and figure out what you’re doing wrong – the team is trying to do too much too fast, or the code is too unstable, or people don’t understand the code enough – and fix it and move on. Reviews and Retrospectives Retrospectives are important in maintenance to keep the team moving forward, to find better ways of working, and to solve problems. But like many practices, regular reviews reach a point of diminishing returns over time – people end up going through the motions. Once the team is setup, reviews don’t need to be done in each iteration unless the team runs into problems. Schedule reviews when you or the team need them. 
Collect data on how the team is working, on cycle time and bug report/fix ratios, correlate problems in production with changes, and get the team together to review if the numbers move off track. If the team runs into a serious problem like a major production failure, then get to the bottom of it through Root Cause Analysis. Sustainable pace / 40-hour week It’s not always possible to work a 40-hour week in maintenance. There are times when the team will be pushed to make urgent changes, spend late nights firefighting, releasing after hours and testing on weekends. But if this happens too often or goes on too long the team will burn out. It’s critical to establish a sustainable pace over the long term, to treat people fairly and give them a chance to do a good job. Pairing Pairing is hard to do in small teams where people are working on many different things. Pairing does make sense in some cases – people naturally pair-up when trying to debug a nasty problem or walking through a complicated change – but it’s not necessary to force it on people, and there are good reasons not to. Some teams (like mine) rely more on code reviews instead of pairing, or try to get developers to pair when first looking at a problem or change, and at the end again to review the code and tests. The important thing is to ensure that changes get looked at by at least one other person if possible, however this gets done. Collective Code Ownership Because maintenance teams are usually small and have to deal with a lot of different kinds of work, sooner or later different people will end up working on different parts of the code. It’s necessary, and it’s a good thing because people get a chance to learn more about the system and work with different technologies and on different problems. But there’s still a place for specialists in maintenance. 
You want the people who know the code the best to make emergency fixes or high-risk changes – or at least have them review the changes – because it has to work the first time. And sometimes you have no choice – sometimes there is only one person who understands a framework or language or technical problem well enough to get something done. The article continues at You can’t be Agile in Maintenance? (Part 2). Reference: You can’t be Agile in Maintenance? from our JCG partner Jim Bird at the “Building Real Software” blog. Related Articles :Save money from Agile Development Standups – take them or leave them Agile software development recommendations for users and new adopters Breaking Down an Agile process Backlog Not doing Code Reviews? What’s your excuse? Even Backlogs Need Grooming Java Tutorials and Android Tutorials list...

Concurrency optimization – Reduce lock granularity

Performance is very important in high-load multi-threaded applications. Developers must be aware of concurrency issues in order to achieve better performance. When we need concurrency we usually have a resource that must be shared by two or more threads. In such cases we have lock contention: only one of the threads will acquire the lock (on the resource) and all the other threads that want the lock will block. This synchronization does not come for free; both the JVM and the OS consume resources in order to provide you with a valid concurrency model. The three most fundamental factors that make concurrency implementation resource intensive are:Context switching Memory synchronization BlockingIn order to write optimized code for synchronization you have to be aware of these 3 factors and how to decrease them. There are many things that you must watch out for when writing such code. In this article I will show you a technique to decrease these factors by reducing lock granularity. Starting with the basic rule: Do not hold the lock longer than necessary. Do whatever you need to do before acquiring the lock, use the lock only to act on the synchronized resource, and release it immediately. See a simple example: public class HelloSync { private Map<String, String> dictionary = new HashMap<String, String>(); public synchronized void borringDeveloper(String key, String value) { long startTime = (new java.util.Date()).getTime(); value = value + "_"+startTime; dictionary.put(key, value); System.out.println("I did this in "+ ((new java.util.Date()).getTime() - startTime)+" milliseconds"); } }In this example we violate the basic rule, because we create two Date objects, call System.out.println(), and do many String concatenations. The only action that needs synchronization is: “dictionary.put(key, value);” Alter the code and move the synchronization from method scope to this single line. 
A slightly better version is this: public class HelloSync { private Map<String, String> dictionary = new HashMap<String, String>(); public void borringDeveloper(String key, String value) { long startTime = (new java.util.Date()).getTime(); value = value + "_"+startTime; synchronized (dictionary) { dictionary.put(key, value); } System.out.println("I did this in "+ ((new java.util.Date()).getTime() - startTime)+" milliseconds"); } }The above code can be written even better, but I just want to give you the idea. If you are wondering how, check java.util.concurrent.ConcurrentHashMap. So, how can we reduce lock granularity? The short answer: by asking for locks as rarely as possible. The basic idea is to use separate locks to guard multiple independent state variables of a class, instead of having only one lock in class scope. Check this simple example that I have seen in many applications. public class Grocery { private final ArrayList<String> fruits = new ArrayList<String>(); private final ArrayList<String> vegetables = new ArrayList<String>(); public synchronized void addFruit(int index, String fruit) { fruits.add(index, fruit); } public synchronized void removeFruit(int index) { fruits.remove(index); } public synchronized void addVegetable(int index, String vegetable) { vegetables.add(index, vegetable); } public synchronized void removeVegetable(int index) { vegetables.remove(index); } }The grocery owner can add/remove fruits and vegetables in/from his grocery shop. This implementation of Grocery guards both fruits and vegetables using the Grocery instance’s own lock, as the synchronization is done at method scope. Instead of this single fat lock, we can use two separate guards, one for each resource (fruits and vegetables). Check the improved code below. 
public class Grocery { private final ArrayList<String> fruits = new ArrayList<String>(); private final ArrayList<String> vegetables = new ArrayList<String>(); public void addFruit(int index, String fruit) { synchronized(fruits) { fruits.add(index, fruit); } } public void removeFruit(int index) { synchronized(fruits) { fruits.remove(index); } } public void addVegetable(int index, String vegetable) { synchronized(vegetables) { vegetables.add(index, vegetable); } } public void removeVegetable(int index) { synchronized(vegetables) { vegetables.remove(index); } } }After using two guards (splitting the lock), we will see less locking traffic than with the original fat lock. This technique works best when we apply it to locks under moderate contention. If we apply it to locks under slight contention, the gain is small, but still positive. If we apply it to locks under heavy contention, the result is not always better, and you must be aware of this. Please use this technique judiciously. If you suspect that a lock is heavily contended, then please follow these steps:Confirm the traffic of your production requirements, multiply it by 3 or 5 (or even 10 if you want to be well prepared). Run the appropriate tests on your testbed, based on the new traffic. Compare both solutions and only then choose the most appropriate.There are more techniques that can improve synchronization performance, but for all of them the basic rule is one: Do not hold the lock longer than necessary. This basic rule can be translated to “ask for locks as rarely as possible”, as I have already explained, or into other solutions which I will try to describe in future articles. Two more important pieces of advice:Be aware of the classes in the java.util.concurrent package (and its subpackages), as there are very clever and useful implementations. Concurrency code can often be minimized by using good design patterns. 
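To illustrate the first of those two pieces of advice, here is a minimal sketch of the earlier HelloSync example rebuilt on java.util.concurrent.ConcurrentHashMap. This is one reasonable rewrite and not code from the original article; the method name is carried over only for comparison:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HelloConcurrent {
    // ConcurrentHashMap synchronizes internally, so no explicit lock is
    // needed around a simple put or get.
    private final Map<String, String> dictionary = new ConcurrentHashMap<String, String>();

    public void borringDeveloper(String key, String value) {
        long startTime = System.currentTimeMillis(); // outside any lock
        value = value + "_" + startTime;
        dictionary.put(key, value);                  // thread-safe on its own
        System.out.println("I did this in "
                + (System.currentTimeMillis() - startTime) + " milliseconds");
    }

    public String lookup(String key) {
        return dictionary.get(key);                  // also thread-safe
    }

    public static void main(String[] args) {
        HelloConcurrent hc = new HelloConcurrent();
        hc.borringDeveloper("greeting", "hello");
        System.out.println(hc.lookup("greeting"));
    }
}
```

The map handles its own fine-grained locking internally, so contention on simple operations is usually lower than anything we could hand-roll with a single monitor – which is exactly the lock-granularity idea of this article, pushed down into the library.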
Always keep Enterprise Integration Patterns in mind; they can save your nights.Reference: Reduce lock granularity – Concurrency optimization from our JCG partner Adrianos Dadis at Java, Integration and the virtues of source. Related Articles :Java Concurrency Tutorial – Semaphores Java Concurrency Tutorial – Reentrant Locks Java Concurrency Tutorial – Thread Pools Java Concurrency Tutorial – Callable, Future Java Concurrency Tutorial – Blocking Queues Java Concurrency Tutorial – CountDownLatch  Java Fork/Join for Parallel Programming Java Memory Model – Quick overview and things to notice Java Tutorials and Android Tutorials list...

Apache Shiro : Application Security Made Easy

Considering that Java is over 10 years old, the number of choices for application developers that need to build authentication and authorization into their applications is shockingly low. In Java and J2EE, the JAAS specification was an attempt to address security. While JAAS works for authentication, the authorization part is just too cumbersome to use. The EJB and Servlet specifications offer coarse-grained authorization at a method and resource level. But these are too coarse to be of any use in real world applications. For Spring users, Spring Security is an alternative. But it is a little complicated to use, especially its authorization model. A majority of applications end up building their own home grown solutions for authentication and authorization. Apache Shiro is an open source Java security framework that addresses this problem. It is an elegant framework that lets you add authentication, authorization and session management to your application with ease. The highlights of Shiro are: It is a pure Java framework. It works with all kinds of Java applications: J2SE, J2EE, web, standalone or distributed. It can integrate easily with various repositories that may host user and permission metadata, such as RDBMSs and LDAP directories. It has a simple and intuitive permission model that can apply to a wide variety of problem domains. It is a model that lets you focus on your problem domain without getting bogged down in the framework. It has built-in support for session management. It has built-in support for caching metadata. It integrates very easily with Spring; the same applies to any J2EE application server. Most importantly, it is very easy to use. Most of the time, all you will need to do to integrate Shiro will be to implement a Realm that ties Shiro to your user and permission metadata. Shiro Concepts The SecurityManager encapsulates the security configuration of an application that uses Shiro. A Subject is the runtime view of a user that is using the system. 
When the subject is created, it is not authenticated. For authentication, the login method must be called, passing in the proper credentials. A Session represents the session associated with an authenticated Subject. The session has a session id. Applications can store arbitrary data in the session. The session is valid until the user logs out or the session times out. A permission represents what actions a subject may perform on a resource in the application. Out of the box, Shiro supports permissions represented by colon-separated tokens. Each token has some logical meaning. For example, my application may define a permission as ResourceType:actions:ResourceInstance. More concretely, File:read:contacts.doc represents a permission to read the file contacts.doc. The permission must be associated with a user to grant that permission to the user. A Role is a collection of permissions that might represent the ability to perform some organizational function. Roles make the association between users and permissions more manageable. A Realm abstracts your user, permission and role metadata for Shiro. You make this data available to Shiro by implementing a realm and plugging it into Shiro. Typical realms use either a relational database or LDAP to store user data. Tutorial Let us build a simple Java application that does some authentication and authorization. For this tutorial you will need:Apache Shiro A Java development environment. I use Eclipse, but you can use other IDEs or command line tools as well. You may download the source code for this example at simpleshiro.zipStep 1: Create a shiro.ini configuration file We will use the default file-based realm that comes with Shiro. This reads the user/permission metadata from the shiro.ini file. In a subsequent tutorial, I will show how to build a realm that gets data from a relational database. In the ini file, let us define some users and associate some roles with them. 
# Simple shiro.ini file [users] # user admin with password 123456 and role Administrator admin = 123456, Administrator # user mike with password abcdef and role Reader mike = abcdef, Reader # user joe with password !23abC2 and role Writer joe = !23abC2, Writer # ----------------------------------------------------------------------------- # Roles with assigned permissions [roles] # A permission is modeled as Resourcetype:actions:resourceinstances # Administrator has permission to do all actions on all resources Administrator = *:*:* # Reader has permission to read all files Reader = File:read:* # Writer role has permission to read and write all files Writer = File:read,write:*In the above shiro.ini we have defined 3 users and 3 roles. A permission is modeled as colon-separated tokens, and each token can have multiple comma-separated parts; each token and part grants permission over some application-specific domain. Step 2: Bootstrap Shiro into your application Factory<SecurityManager> factory = new IniSecurityManagerFactory("classpath:shiro.ini"); SecurityManager securityManager = factory.getInstance(); SecurityUtils.setSecurityManager(securityManager);IniSecurityManagerFactory loads the configuration from shiro.ini and creates a singleton SecurityManager for the application. For simplicity, our shiro.ini goes with the default SecurityManager configuration, which uses a text-based realm and gets user, permission and role metadata from the shiro.ini file. Step 3: Login Subject usr = SecurityUtils.getSubject(); UsernamePasswordToken token = new UsernamePasswordToken("mike", "abcdef"); try { usr.login(token); } catch (AuthenticationException ae) { log.error(ae.toString()) ; return ; } log.info("User [" + usr.getPrincipal() + "] logged in successfully.");SecurityUtils is a factory class for getting an existing subject or creating a new one. Credentials are passed in using an AuthenticationToken. In this case, we want to pass in a username and password and hence use the UsernamePasswordToken. 
Then we call the login method on the Subject, passing in the authentication token.

Step 4: Check if the user has permission

if (usr.isPermitted("File:write:xyz.doc")) {
    log.info(usr.getPrincipal() + " has permission to write xyz.doc ");
} else {
    log.info(usr.getPrincipal() + " does not have permission to write xyz.doc ");
}
if (usr.isPermitted("File:read:xyz.doc")) {
    log.info(usr.getPrincipal() + " has permission to read xyz.doc ");
} else {
    log.info(usr.getPrincipal() + " does not have permission to read xyz.doc ");
}

Subject has an isPermitted method that takes a permission string as a parameter and returns true/false.

Step 5: Logout

usr.logout();

The logout method logs the user out. To get familiar with Shiro, try changing the UsernamePasswordToken and logging in as a different user. Check some other permissions. Modify the shiro.ini file to create new users and roles with different permissions. Run the program a few times with different metadata and different input. In a production environment, you will not want users and roles in an ini file; you will want them in a secure repository like a relational database or LDAP. In the next part, I will show you how to build a Shiro Realm that reads user, role and permission metadata from a relational database.

Reference: Apache Shiro : Application Security Made Easy by our JCG partner Manoj at The Khangaonkar Report blog
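As a footnote to the tutorial above, the colon-separated permission scheme can be approximated in a few lines of plain Java. This is only a hedged sketch of the matching idea, not Shiro's actual implementation (which lives in org.apache.shiro.authz.permission.WildcardPermission and also treats missing trailing tokens as implied wildcards); it just shows how "*" tokens and comma-separated action lists resolve:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Simplified sketch of Shiro-style wildcard permission matching.
// Not Shiro's real code; see org.apache.shiro.authz.permission.WildcardPermission.
public class PermissionSketch {

    // Returns true if the granted permission string implies the requested one.
    public static boolean implies(String granted, String requested) {
        String[] g = granted.split(":");
        String[] r = requested.split(":");
        for (int i = 0; i < r.length; i++) {
            // In this sketch, a granted permission with fewer tokens than the
            // request implies nothing beyond the tokens it spells out.
            if (i >= g.length) return false;
            if (g[i].equals("*")) continue; // wildcard token matches anything
            Set<String> grantedParts = new HashSet<>(Arrays.asList(g[i].split(",")));
            for (String part : r[i].split(",")) {
                if (!grantedParts.contains(part)) return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // The Writer role from shiro.ini can read contacts.doc...
        System.out.println(implies("File:read,write:*", "File:read:contacts.doc")); // true
        // ...but the Reader role cannot write it.
        System.out.println(implies("File:read:*", "File:write:contacts.doc"));      // false
        // The Administrator role implies everything.
        System.out.println(implies("*:*:*", "File:write:contacts.doc"));            // true
    }
}
```

Running this against the roles defined in the shiro.ini above reproduces the behavior of the isPermitted checks in Step 4.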

GPGPU with Jcuda the Good, the Bad and … the Ugly

In our previous article, GPGPU for Java Programming, we showed how to set up an environment to execute CUDA from within Java code. However, that article focused only on setting up the environment, leaving the subject of parallelism untouched. In this article we will see how we can utilize a GPU to do what it does best: parallel processing. Through this example we will take some metrics and see where GPU processing is stronger or weaker than using a CPU... and of course, as the title suggests, there is an ugly part at the end. We will start our GPU parallelism exploration by devising an example that differs from the samples found in most of the available GPGPU documentation, which is authored primarily by people with a strong graphics or science background. Most of those examples talk about vector additions or some other mathematical construct. Let's work on an example that somewhat resembles business situations. So imagine that we have a list of products, each with a code tag and a price, and we would like to apply a 10% overhead to all products whose code is "abc". We will implement the example first in C to make some performance measurements comparing CPU and GPU processing. Afterwards we will of course implement the same in Java, but we will avoid making any measurements there, as they are a bit trickier in Java: we would have to take into account things like garbage collection, just-in-time compilation, etc.
//============================================================================
// Name        : StoreDiscountExample.cu
// Author      : Spyros Sakellariou
// Version     : 1.0
// Description : The Good the Bad and the Ugly
//============================================================================

#include <iostream>
#include <sys/time.h>

typedef struct {
  char code[3];
  float listPrice;
} product;

void printProducts(long size, product * myProduct){
  printf("Price of First item=%f,%s\n",myProduct[0].listPrice,myProduct[0].code);
  printf("Price of Second item=%f,%s\n",myProduct[1].listPrice,myProduct[1].code);
  printf("Price of Middle item=%f,%s\n",myProduct[(size-1)/2].listPrice,myProduct[(size-1)/2].code);
  printf("Price of Almost Last item=%f,%s\n",myProduct[size-2].listPrice,myProduct[size-2].code);
  printf("Price of Last item=%f,%s\n",myProduct[size-1].listPrice,myProduct[size-1].code);
}

float calculateMiliseconds (timeval t1,timeval t2) {
  float elapsedTime;
  elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0;
  elapsedTime += (t2.tv_usec - t1.tv_usec) / 1000.0;
  return elapsedTime;
}

__global__ void kernel(long size, product *products) {
  long kernelid = threadIdx.x + blockIdx.x * blockDim.x;
  while(kernelid < size) {
    if (products[kernelid].code[0]=='a' && products[kernelid].code[1]=='b' && products[kernelid].code[2]=='c')
      products[kernelid].listPrice*=1.10;
    kernelid += blockDim.x * gridDim.x;
  }
}

int main( int argc, char** argv) {
  timeval t1,t2;
  cudaEvent_t eStart,eStop;
  float elapsedTime;
  long threads = 256;
  long blocks = 1024;
  long size = 9000000;
  char *product1 = "abc";
  char *product2 = "bcd";
  product *myProduct;
  product *dev_Product;

  printf("blocks=%d x threads=%d total threads=%d total number of products=%d\n\n",blocks,threads,threads*blocks,size);

  // note: sizeof(product), not sizeof(myProduct) -- the latter is the size of a pointer
  myProduct=(product*)malloc(sizeof(product)*size);
  cudaMalloc((void**)&dev_Product,sizeof(product)*size);
  cudaEventCreate(&eStart);
  cudaEventCreate(&eStop);

  gettimeofday(&t1, NULL);
  for (long i = 0; i<size; i++){
    if (i%2==0)
      strcpy(myProduct[i].code,product1);
    else
      strcpy(myProduct[i].code,product2);
    myProduct[i].listPrice = i+1;
  }
  gettimeofday(&t2, NULL);
  printf ( "Initialization time %4.2f ms\n", calculateMiliseconds(t1,t2) );
  printProducts(size,myProduct);

  cudaMemcpy(dev_Product,myProduct,sizeof(product)*size,cudaMemcpyHostToDevice);
  cudaEventRecord(eStart,0);
  kernel<<<blocks,threads>>>(size,dev_Product);
  cudaEventRecord(eStop,0);
  cudaEventSynchronize(eStop);
  cudaMemcpy(myProduct,dev_Product,sizeof(product)*size,cudaMemcpyDeviceToHost);
  cudaEventElapsedTime(&elapsedTime,eStart,eStop);
  printf ( "\nCuda Kernel Time=%4.2f ms\n", elapsedTime );
  printProducts(size,myProduct);

  gettimeofday(&t1, NULL);
  long j=0;
  while (j < size){
    if (myProduct[j].code[0]=='a' && myProduct[j].code[1]=='b' && myProduct[j].code[2]=='c')
      myProduct[j].listPrice*=0.5;
    j++;
  }
  gettimeofday(&t2, NULL);
  printf ( "\nCPU Time=%4.2f ms\n", calculateMiliseconds(t1,t2) );
  printProducts(size,myProduct);

  cudaFree(dev_Product);
  free(myProduct);
}

In lines 11-14 there is a definition of a structure containing our product, with a character array for the product code and a float for its price. In lines 16 and 24 there are definitions of two utility functions: one that prints some sample products (so we can see whether the work has been done) and one that converts raw timeval differences into milliseconds. Note that using the standard C clock function will not work, as its granularity is not fine enough to measure milliseconds. Line 33 is where our kernel is written. Compared to the previous article this looks somewhat more complex, so let's dissect it further... In line 35 we define a kernelid variable. This variable will hold a unique thread id for the thread being executed. CUDA assigns each thread a thread id and a block id that are unique only within their own dimension. In our example we instruct the GPU to launch 256 threads per block across 1024 blocks, so in effect the GPU will execute 262144 threads. This is GOOD!
Although CUDA provides us with the threadIdx.x and blockIdx.x parameters during execution, we need to construct a unique thread id manually in order to know which thread we are currently in. The unique thread id needs to run from 0 up to 262143, and we can easily build it by multiplying the number of threads per block (the CUDA parameter blockDim.x) by the current block id and adding the current thread id, thus:

unique thread id = current thread id + current block id * number of threads per block

If you read ahead you will already have realized that although 262 thousand threads is impressive, our data set is made of 9 million items, so each thread needs to process more than one item. We do this by setting up a loop in line 36 that checks that our thread id does not overshoot our array of products. The loop uses the thread id as its index, but we increment it using the following formula:

index increment += threads per block * total number of blocks

Thus each thread will go around the loop 9 million / 262 thousand times, meaning it will process about 34 items. The rest of the kernel code is pretty simple and self-explanatory: whenever we find a product with code "abc" we multiply its price by 1.1 (our 10% overhead). Notice that the strcpy function cannot be used inside our kernel; you will get a compilation error if you try it. Not so good! Moving on to the main function: in lines 45 and 46 we define two C timers (t1 and t2) and two CUDA event timers (eStart and eStop). We need the CUDA timers because the kernel is executed asynchronously: the kernel function call returns instantly, so our C timers would measure only the time it took to complete the function call. The fact that the kernel call returns instantly also means we allow the CPU to do other tasks during GPU code execution. This is also GOOD! The parameters following our timers are self-explanatory: we define the number of threads, the number of blocks, the size of the products array, etc.
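The indexing scheme described above is commonly called a grid-stride loop, and you can convince yourself that it visits every element exactly once by simulating it on the CPU. The sketch below (plain Java, with the sizes shrunk for readability; the real run uses 256 threads x 1024 blocks over 9,000,000 items) walks exactly the same arithmetic as the kernel:

```java
// CPU simulation of the kernel's grid-stride loop: each simulated thread
// starts at (threadIdx + blockIdx * blockDim) and strides forward by the
// total thread count (blockDim * gridDim) until it passes the end of the data.
public class GridStrideSketch {

    // Counts how many times each of `size` items is visited by the loop.
    public static int[] visitCounts(int blockDim, int gridDim, int size) {
        int[] visits = new int[size];
        for (int blockIdx = 0; blockIdx < gridDim; blockIdx++) {
            for (int threadIdx = 0; threadIdx < blockDim; threadIdx++) {
                // unique thread id = current thread id + current block id * threads per block
                long kernelid = threadIdx + (long) blockIdx * blockDim;
                while (kernelid < size) {
                    visits[(int) kernelid]++;
                    kernelid += (long) blockDim * gridDim; // stride = total thread count
                }
            }
        }
        return visits;
    }

    public static void main(String[] args) {
        // Tiny example: 4 threads/block, 2 blocks, 20 items -> 8 threads, 2-3 items each
        for (int count : visitCounts(4, 2, 20)) {
            if (count != 1) throw new AssertionError("an item was visited " + count + " times");
        }
        System.out.println("every item visited exactly once");
    }
}
```

The same check passes for the article's real dimensions (256, 1024, 9000000), which is why each of the 262144 threads ends up processing about 34 items.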
The myProduct pointer will be used for CPU processing and the dev_Product pointer for GPU processing. In lines 58 to 61 we allocate RAM and GPU memory for myProduct and dev_Product, and we also create the CUDA timers that will help us measure kernel execution time. In lines 63 to 73 we initialize myProduct with codes and prices and print the time it took the CPU to complete the task. We also print some sample products from our array to make sure the job is done correctly. In lines 74 to 85 we copy the products array to GPU memory, execute our kernel indicating the number of threads and blocks we want, and copy the results back to the myProduct array. We print the time it took to execute the kernel and some sample products, again to make sure we got the job done correctly. Finally, in lines 88 to 99 we let the CPU do a similar amount of work to what the GPU did, i.e. apply a 50% discount to all products the GPU added overheads to. Then we print the time it took to execute the CPU task and some sample products to make sure the job is done. Let's compile and run this code:

# nvcc StoreDiscountExample.cu -o StoreDiscountExample
# ./StoreDiscountExample
blocks=1024 x threads=256 total threads=262144 total number of products=9000000

Initialization time 105.81 ms
Price of First item=1.000000,abc
Price of Second item=2.000000,bcd
Price of Middle item=4500000.000000,bcd
Price of Almost Last item=8999999.000000,abc
Price of Last item=9000000.000000,bcd

Cuda Kernel Time=1.38 ms
Price of First item=1.100000,abc
Price of Second item=2.000000,bcd
Price of Middle item=4500000.000000,bcd
Price of Almost Last item=9899999.000000,abc
Price of Last item=9000000.000000,bcd

CPU Time=59.58 ms
Price of First item=0.550000,abc
Price of Second item=2.000000,bcd
Price of Middle item=4500000.000000,bcd
Price of Almost Last item=4949999.500000,abc
Price of Last item=9000000.000000,bcd
#

Wow!
It took the GPU 1.38 milliseconds to do what took the CPU 59.58 milliseconds (numbers will vary depending on your hardware, of course). This is GOOD! Hold your horses for a second, though! Before you decide to delete all your code and start re-writing everything in CUDA, there is a catch: we omitted something serious, namely measuring how long it takes to copy 9 million records from RAM to GPU memory and back. Here is the code from lines 74 to 85, altered with timers that measure copying the products list to and from GPU memory:

gettimeofday(&t1, NULL);
cudaMemcpy(dev_Product,myProduct,sizeof(product)*size,cudaMemcpyHostToDevice);

cudaEventRecord(eStart,0);
kernel<<<blocks,threads>>>(size,dev_Product);
cudaEventRecord(eStop,0);
cudaEventSynchronize(eStop);

cudaMemcpy(myProduct,dev_Product,sizeof(product)*size,cudaMemcpyDeviceToHost);
gettimeofday(&t2, NULL);
printf ( "\nCuda Total Time=%4.2f ms\n", calculateMiliseconds(t1,t2));
cudaEventElapsedTime(&elapsedTime,eStart,eStop);
printf ( "Cuda Kernel Time=%4.2f ms\n", elapsedTime );
printProducts(size,myProduct);

Let's compile and run this code:

# nvcc StoreDiscountExample.cu -o StoreDiscountExample
# ./StoreDiscountExample
blocks=1024 x threads=256 total threads=262144 total number of products=9000000

Initialization time 108.31 ms
Price of First item=1.000000,abc
Price of Second item=2.000000,bcd
Price of Middle item=4500000.000000,bcd
Price of Almost Last item=8999999.000000,abc
Price of Last item=9000000.000000,bcd

Cuda Total Time=55.13 ms
Cuda Kernel Time=1.38 ms
Price of First item=1.100000,abc
Price of Second item=2.000000,bcd
Price of Middle item=4500000.000000,bcd
Price of Almost Last item=9899999.000000,abc
Price of Last item=9000000.000000,bcd

CPU Time=59.03 ms
Price of First item=0.550000,abc
Price of Second item=2.000000,bcd
Price of Middle item=4500000.000000,bcd
Price of Almost Last item=4949999.500000,abc
Price of Last item=9000000.000000,bcd
#

Notice the Cuda Total Time is 55 milliseconds; that's only 4
milliseconds faster than using the CPU in a single thread. This is BAD! So although the GPU is super-fast when it comes to parallel task execution, there is a heavy penalty when we copy items between RAM and GPU memory. There are some advanced tricks, such as direct memory access, that can be used, but the moral is that you have to be very careful when deciding to use the GPU: if your algorithm requires a lot of data movement, then GPGPU is probably not the answer. Now that we have completed our performance tests, let's have a look at how we can implement the same function using jcuda. Here is the code for the Java part:

import static jcuda.driver.JCudaDriver.*;
import jcuda.*;
import jcuda.driver.*;
import jcuda.runtime.JCuda;

public class StoreDiscountExample {
  public static void main(String[] args) {
    int threads = 256;
    int blocks = 1024;
    final int size = 9000000;
    byte product1[] = "abc".getBytes();
    byte product2[] = "bcd".getBytes();
    byte productList[] = new byte[size*3];
    float productPrices[] = new float[size];
    long size_array[] = {size};
    cuInit(0);
    CUcontext pctx = new CUcontext();
    CUdevice dev = new CUdevice();
    cuDeviceGet(dev, 0);
    cuCtxCreate(pctx, 0, dev);
    CUmodule module = new CUmodule();
    cuModuleLoad(module, "StoreDiscountKernel.ptx");
    CUfunction function = new CUfunction();
    cuModuleGetFunction(function, module, "kernel");
    int j=0;
    for (int i = 0; i<size; i++){
      j=i*3;
      if (i%2==0) {
        productList[j]=product1[0];
        productList[j+1]=product1[1];
        productList[j+2]=product1[2];
      }
      else {
        productList[j]=product2[0];
        productList[j+1]=product2[1];
        productList[j+2]=product2[2];
      }
      productPrices[i] = i+1;
    }
    printSamples(size, productList, productPrices);
    CUdeviceptr size_dev = new CUdeviceptr();
    cuMemAlloc(size_dev, Sizeof.LONG);
    cuMemcpyHtoD(size_dev, Pointer.to(size_array), Sizeof.LONG);
    CUdeviceptr productList_dev = new CUdeviceptr();
    cuMemAlloc(productList_dev, Sizeof.BYTE*3*size);
    cuMemcpyHtoD(productList_dev, Pointer.to(productList), Sizeof.BYTE*3*size);
    CUdeviceptr productPrice_dev = new CUdeviceptr();
    cuMemAlloc(productPrice_dev, Sizeof.FLOAT*size);
    cuMemcpyHtoD(productPrice_dev, Pointer.to(productPrices), Sizeof.FLOAT*size);
    Pointer kernelParameters = Pointer.to(
      Pointer.to(size_dev),
      Pointer.to(productList_dev),
      Pointer.to(productPrice_dev)
    );
    cuLaunchKernel(function,
      blocks, 1, 1,
      threads, 1, 1,
      0, null, kernelParameters, null);
    cuMemcpyDtoH(Pointer.to(productPrices), productPrice_dev, Sizeof.FLOAT*size);
    printSamples(size, productList, productPrices);
    JCuda.cudaFree(productList_dev);
    JCuda.cudaFree(productPrice_dev);
    JCuda.cudaFree(size_dev);
  }

  public static void printSamples(int size, byte[] productList, float[] productPrices) {
    System.out.print(String.copyValueOf(new String(productList).toCharArray(), 0, 3));
    System.out.println(" "+productPrices[0]);
    System.out.print(String.copyValueOf(new String(productList).toCharArray(), 3, 3));
    System.out.println(" "+productPrices[1]);
    System.out.print(String.copyValueOf(new String(productList).toCharArray(), 6, 3));
    System.out.println(" "+productPrices[2]);
    System.out.print(String.copyValueOf(new String(productList).toCharArray(), 9, 3));
    System.out.println(" "+productPrices[3]);
    System.out.print(String.copyValueOf(new String(productList).toCharArray(), (size-2)*3, 3));
    System.out.println(" "+productPrices[size-2]);
    System.out.print(String.copyValueOf(new String(productList).toCharArray(), (size-1)*3, 3));
    System.out.println(" "+productPrices[size-1]);
  }
}

Starting with lines 14, 15 and 16, we see that we can no longer use a product structure or class. In fact, in jcuda everything that will be passed as a parameter to the kernel has to be a one-dimensional array. So in line 14 we create a two-dimensional array of bytes represented as a one-dimensional array. The product list array size is equal to the number of products times the size in bytes of each product code (in our case just three bytes).
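The "everything is a one-dimensional primitive array" constraint boils down to manual index arithmetic: product i's code lives at byte offset i*3. A small self-contained sketch of the flatten/read-back convention used above (the class and method names are mine, introduced just for illustration):

```java
import java.nio.charset.StandardCharsets;

// Sketch of the flattening convention jcuda forces on us: a list of
// fixed-width (3-byte) product codes stored back-to-back in one byte array.
public class FlattenSketch {
    static final int CODE_WIDTH = 3;

    // Packs the codes into a single flat byte array, code i at offset i*3.
    public static byte[] flatten(String[] codes) {
        byte[] out = new byte[codes.length * CODE_WIDTH];
        for (int i = 0; i < codes.length; i++) {
            byte[] b = codes[i].getBytes(StandardCharsets.US_ASCII);
            System.arraycopy(b, 0, out, i * CODE_WIDTH, CODE_WIDTH);
        }
        return out;
    }

    // Reads the i-th code back out of the flat array, which is essentially
    // what the printSamples method above does by hand with fixed offsets.
    public static String codeAt(byte[] flat, int i) {
        return new String(flat, i * CODE_WIDTH, CODE_WIDTH, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        byte[] flat = flatten(new String[] {"abc", "bcd", "abc"});
        System.out.println(codeAt(flat, 0)); // abc
        System.out.println(codeAt(flat, 1)); // bcd
    }
}
```

The kernel performs the mirror image of codeAt: it computes charIndex = kernelid*3 and compares the three bytes at that offset.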
We also create a second array to store the product prices as floats, and finally even the size of our product list needs to be put into a one-dimensional array. I think by now you have probably guessed what I am about to say: this is just plain UGLY! In lines 29 to 45 we populate the product list and product price arrays, and then we pass them to the kernel for processing by creating CUDA device pointers, allocating memory and copying the data to GPU memory before calling the kernel function. Since we had to convert everything into one-dimensional arrays of primitives, our kernel code needs to change a bit as well:

extern "C"
__global__ void kernel(long *size, char *productCodes, float *productPrices) {
  long kernelid = threadIdx.x + blockIdx.x * blockDim.x;
  long charIndex = kernelid*3;
  while(kernelid < size[0]) {
    if (productCodes[charIndex]=='a' && productCodes[charIndex+1]=='b' && productCodes[charIndex+2]=='c')
      productPrices[kernelid]*=1.10;
    kernelid += blockDim.x * gridDim.x;
    charIndex = kernelid*3;
  }
}

The only difference is that we multiply the kernelid index by 3 in order to find the correct starting character in our productCodes array. Let's compile and run the Java example:

# nvcc -ptx StoreDiscountKernel.cu -o StoreDiscountKernel.ptx
# javac -cp ~/GPGPU/jcuda/JCuda-All-0.4.0-beta1-bin-linux-x86_64/jcuda-0.4.0-beta1.jar StoreDiscountExample.java
# java -cp ~/GPGPU/jcuda/JCuda-All-0.4.0-beta1-bin-linux-x86_64/jcuda-0.4.0-beta1.jar StoreDiscountExample
abc 1.0
bcd 2.0
abc 3.0
bcd 4.0
abc 8999999.0
bcd 9000000.0
abc 1.1
bcd 2.0
abc 3.3
bcd 4.0
abc 9899999.0
bcd 9000000.0
#

Ugly as it may be, the code works in a similar fashion to the C version.
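Before summarizing, it is worth putting the memory-copy penalty measured earlier into numbers. Assuming the product struct occupies 8 bytes (three chars, a compiler-inserted padding byte, and a 4-byte float; the padding is compiler-dependent, so this is an estimate), 9 million products are copied to the device and back again. A back-of-envelope calculation in Java:

```java
// Rough estimate of the data volume and effective transfer rate implied by
// the measurements above. The 8-byte struct size is an assumption
// (compiler-dependent padding), so treat the result as an order of magnitude.
public class TransferEstimate {

    // Total megabytes moved across the bus: size * struct bytes, both directions.
    public static double megabytesMoved(long products, int structBytes) {
        return products * (double) structBytes * 2 / (1024 * 1024);
    }

    public static void main(String[] args) {
        double mb = megabytesMoved(9_000_000L, 8);
        // Copy time ~= measured total minus kernel time: 55.13 - 1.38 ms
        double copyMs = 55.13 - 1.38;
        System.out.printf("%.0f MB moved, ~%.1f GB/s effective%n",
                mb, mb / 1024 / (copyMs / 1000.0));
    }
}
```

That works out to roughly 137 MB moved at an effective rate in the low single-digit GB/s, a plausible figure for pageable-memory transfers over PCIe, and it explains why a 1.38 ms kernel still costs 55 ms end to end.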
So here is a summary of our experience with GPU processing and jcuda:

GOOD: Very fast performance (my hardware was an AMD Phenom II Quad Core 3.4GHz CPU and an NVIDIA GeForce GTX 560 with 336 cores)
GOOD: Asynchronous operation, letting the CPU do other tasks
BAD: Memory copies impose a considerable performance penalty
UGLY: Jcuda is undoubtedly useful if you want to execute CUDA kernels from within Java, but having to convert everything into one-dimensional arrays of primitives is really not convenient.

In our previous article there were some very interesting comments about Java tools for OpenCL (the alternative to CUDA for GPGPU). In the next article we will have a look at these tools and see if they look any "prettier" than Jcuda.

Reference: GPGPU with Jcuda the Good, the Bad and … the Ugly from our W4G partner Spyros Sakellariou.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.