Project Lambda: To Multicore and Beyond

The presentation “Project Lambda: To Multicore and Beyond” (Session 27400, not to be confused with Brian Goetz’s presentation of the same name) was given in the Hilton San Francisco Grand Ballroom B in the early afternoon on Monday at JavaOne 2011. Even with Grand Ballroom A closed off, this is an extremely large venue for a non-keynote session, and a large camera (with camera operator) was poised to film the presentation. This can probably be construed to mean that the conference organizers anticipated huge interest in coverage of Java SE 8 (JSR 337) and Project Lambda. Alex Buckley (Specification Lead for the Java Language and Virtual Machine) and Daniel Smith (Project Lambda Specification Lead) were the presenters, and their abstract for this presentation is shown next.

This session covers the primary new language features for Java SE 8 – lambda expressions, method references, and extension methods – and explores how existing as well as future libraries will be able to take advantage of them to make client code simultaneously more performant and less error-prone.

A functional interface is “an interface with one method.” A lambda expression is “a way to create an implementation of a functional interface.” A lambda expression allows the “meat” of a piece of functionality to be expressed simply and concisely, especially when compared to the bloat of an anonymous class. Several slides included code examples showing how we’d do it today versus the more succinct representation supported by lambda expressions.

Lambda expressions “can refer to any effectively final variables in the enclosing scope.” This means that the final keyword is not required, but rather that the variable must be treated as final (its reference never reassigned) in the method the lambda expression references it from. Some more rules of lambda expressions were announced: the this pointer references the enclosing object rather than the lambda expression. 
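The “today versus lambda” contrast described on the slides can be sketched along the following lines. This is an illustrative example, not code from the presentation, using Comparator (a typical functional interface) and the lambda syntax that eventually shipped in Java 8:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class LambdaVsAnonymous {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Charlie", "Al", "Bobby");

        // Today: an anonymous inner class, mostly boilerplate around one expression
        Collections.sort(names, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // With Project Lambda: just the "meat" of the comparison
        Collections.sort(names, (a, b) -> Integer.compare(a.length(), b.length()));

        System.out.println(names); // [Al, Bobby, Charlie]
    }
}
```

Note that the lambda's parameter types are omitted; as described below, they are inferred from the signature of Comparator's single method.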
There is “no need for parameter types in a lambda expression” because they are “inferred based on the functional interface’s method signature” (no dynamic typing necessary). Method references support the “reuse” of “a method as a lambda expression.”

Buckley talked about external iteration as the currently predominant approach in Java libraries. In this idiom, the “client determines iteration” and the approach is “not thread-safe.” He talked about the disadvantages of introducing a parallel for loop to solve this issue, but extracted some concepts from the parallel for approach: a “filter” and a “reducer.” Buckley introduced the idea that “internal iteration facilitates parallel idioms” because it does not need to be performed serially and is thread-safe.

One of the issues Java 8 faces is the need to retrofit libraries to use lambda expressions when those libraries already have heavily used interfaces, particularly in the collections. One approach that might be used to deal with this issue is static extension methods, similar to those available in C#. There are numerous advantages to this approach, but there are also some major disadvantages, such as not being able to use reflection. The decision was made to revisit the “rule” that one “can’t add an operation to an interface.” Based on this, the subsequent decision was made to add virtual extension methods, which provide a default implementation in the interface that is used only when the receiver class does not override the method.

The slide titled “Are you adding multiple inheritance to Java?!” stated that “Java always had multiple inheritance of types” and “now has multiple inheritance of behavior,” but still does not support “multiple inheritance of state, which causes most problems.” The slide added that “multiple inheritance of behavior is fairly benign” and is really a problem only when compilation occurs in multiple steps. 
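To make the virtual extension method idea concrete, here is a minimal sketch using the default-method syntax that eventually shipped in Java 8. The Greeter interface is an invented example, not from the talk; the point is that the default implementation is used only when the implementing class does not override it:

```java
interface Greeter {
    String name();

    // Virtual extension method: a default implementation lives in the
    // interface and applies only when the receiver class does not override it.
    default String greet() {
        return "Hello, " + name();
    }
}

public class ExtensionMethodDemo {
    public static void main(String[] args) {
        Greeter plain = () -> "world";           // inherits the default greet()
        Greeter custom = new Greeter() {         // overrides the default
            public String name() { return "world"; }
            public String greet() { return "Hi, " + name(); }
        };
        System.out.println(plain.greet());   // Hello, world
        System.out.println(custom.greet());  // Hi, world
    }
}
```

Note that Greeter still has exactly one abstract method, so it remains a functional interface and can be implemented with a lambda expression.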
It was emphasized in this presentation that extension methods are both a language feature and a virtual machine feature (“everything else about inheritance and invocation is a VM feature!”). As part of this, a bullet stated, “invokeinterface will disambiguate multiple behaviors if necessary.” Non-Java JVM languages can “share the wealth” of extension methods, and a slide provided three examples of this.

Daniel Smith took over the presentation with the topic of parallel libraries. He showed a slide, “Behold the New Iterable,” which presented an Iterable interface with methods such as isEmpty(), forEach, filter, map, reduce, and into. He also showed a slide on a ParallelIterable interface, reachable from Iterable via the extension method parallel().

Smith provided references to JSR 335, JSR 166, and Project Lambda as part of his slide on community contributions. He also cited four additional sessions at JavaOne 2011 regarding lambda expressions and closely related topics. Smith ended with a quote from Brian Goetz on Project Lambda: “…we believe the best thing we can do for Java developers is to give them a gentle push towards a more functional style of programming. We’re not going to turn Java into Haskell, nor even into Scala. But the direction is clear.”

Conclusion

Smith’s examples made it clear that lambda expressions will provide tremendous benefits to Java developers in their daily tasks. He showed the types of loops we’ve all had to write many hundreds or thousands of times and the cleaner, more concise syntax that lambda expressions make possible. This presentation made it clear that, with the introduction of lambda expressions, Java will gain many of the benefits enjoyed by dynamically typed languages in terms of fluency and conciseness.

Reference: JavaOne 2011: Project Lambda: To Multicore and Beyond from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Immediate gratification v/s delayed gratification in context of Software

This topic cuts across many different disciplines, but here I want to discuss it in the context of software development and its process. It is a proven fact that when an individual focuses his efforts on long-term goals, he is laying down a foundation for a quality life in the near and far future. Actions that give immediate returns can hardly give any benefits in the long run. A few examples to put this in the context of the software process:

1. When an individual starts learning a new tool/process/method which is better than the current one, he/she faces a slump in productivity (e.g. with TDD). This leads to an environment where managers start thinking that the change is not helping. Or they simply don’t have the courage to face the initial loss of productivity for the sake of increased long-term quality benefits.

2. Focus on process rather than product – It is not unusual for the supervisor of a development team to over-emphasize the product delivered rather than question the quality of the process by which the product is produced. What they are (un)consciously doing is focusing on a short-term goal – they want to see a piece of functionality in action, immediately. If they delay the gratification and, say, allow the technical team to enforce processes/tools, that will ensure that all future deliverables are indeed quality deliverables, because they are investing in something which will improve the process – which is responsible for the product.

3. Similar, but with more drastic implications – starting development/design before the requirements can “sink in.” Today’s software methods and RAD tools allow requirements to change constantly during the project, but it is important that requirements “sink” into the psyche of the technical team. I believe there is always a motive behind a particular software requirement. That motive hardly changes, and if developers are able to study and interact enough with the requirements or the producers of those requirements (the client?) 
they will catch the motive, which is always hidden and never communicated. If design/development happens focusing on that motive, the deliverables will be able to better accommodate requirement changes. Obviously the catch lies in deciding when to stop investing and take the yield. But unfortunately, software teams today try to take the yield way before they have invested enough, and hence they don’t get a good enough yield. Reference: Immediate gratification v/s delayed gratification in context of Software from our JCG partner Advait Trivedi at the CoolCode blog....

Introduction to Puppet For Vagrant Users

I couldn’t find any good, brief, practical introduction to Puppet that gives you basic working knowledge in minimal time, so here it is. You will learn how to do the elementary things with Puppet – install packages, copy files, start services, execute commands. I won’t go into Puppet installation, nodes, etc., as this introduction focuses on users of Vagrant, which comes with Puppet pre-installed and working in the serverless configuration.

What is Puppet?

Puppet is a provisioner – cross-OS software that sets up operating systems by installing and configuring software based on some form of instructions. Here is an example of such instructions – a manifest – for Puppet:

    # my_sample_manifest.pp
    class my_development_env {
      package {'vim':
        ensure => 'present'
      }
    }

    # Apply it
    include my_development_env

Running it with puppet apply --verbose --debug my_sample_manifest.pp would install vim on the system. Notice that while Puppet can be run just once, it is generally intended to be run repeatedly to fetch and apply the latest configuration (usually from some source code management system such as Git). Therefore all its operations must be idempotent – they can be performed safely multiple times.

The Puppet Configuration File a.k.a. Manifest

Puppet manifests are written in a Ruby-like syntax and are composed of declarations of “resources” (packages, files, etc.), grouped optionally into one or more classes (i.e. templates that can be applied to a system). Each concrete resource has a title (e.g. ‘vim’) followed by a colon and comma-separated pairs of property => value. You may have multiple resources of the same type (e.g. package) as long as they have different titles. The property values are most often strings, enclosed by ‘single quotes’ or alternatively with “double quotes” if you want variables within them replaced with their values. (A variable starts with the dollar sign.) 
Instead of a name or a value you can also use an array of titles/values, enclosed with [ ]. (Note: It is a common practice to leave a trailing comma behind the last property => value pair.) You can group resources within classes (class my_class_name { … }) and then apply the class either with include (include my_class_name) or with the more complex class syntax (class { ‘my_class_name’: }). You can also include a class in another class like this.

Doing Things With Puppet

Installing Software With Package

The most common way to install software packages is this simple usage of package:

    package {['vim', 'apache2']:
      ensure => 'present'
    }

Puppet supports various package providers and by default picks the system one (such as apt on Ubuntu or rpm on RedHat). You can explicitly state another supported package provider such as Ruby’s gem or Python’s pip. You can also request a particular version of the package (if supported) with ensure => ‘<version>’ or the latest one with ensure => ‘latest’ (this will also reinstall it whenever a new version is released and Puppet runs). In the case of ensure => ‘present’ (also called ‘installed’): if the package is already installed then nothing happens; otherwise the latest version is installed.

Copying and Creating Files With File

Create a file with content specified in-line:

    file {'/etc/myfile.example':
      ensure  => 'file',
      content => 'line1\nline2\n',
    }

Copy a directory including its content, set ownership etc.:

    file {'/etc/apache2':
      ensure  => 'directory',
      source  => '/vagrant/files/etc/apache2',
      recurse => 'remote',
      owner   => 'root',
      group   => 'root',
      mode    => '0755',
    }

This requires that the directory /vagrant/files/etc/apache2 exists. (Vagrant automatically shares the directory containing the Vagrantfile as /vagrant in the VM, so this actually copies files from the host machine. With the master-agent setup of Puppet you can also get files remotely, from the master, using the puppet:// protocol in the source.) 
You can also create files based on ERB templates (with content => template(‘relative/path/to/it’)), but we won’t discuss that here. You can also create symlinks (with ensure => link, target => ‘path/to/it’) and do other stuff; read more in the file resource documentation.

(Re)Starting Daemons With Service

When you’ve installed the necessary packages and copied their configuration files, you’ll likely want to start the software, which is done with service:

    service { 'apache2':
      ensure  => running,
      require => Package['apache2'],
    }

(We will talk about require later; it makes sure that we don’t try to start Apache before it’s installed.) On Linux, Puppet makes sure that the service is registered with the system to be started after OS restart, and starts it. Puppet reuses the OS’ support for services, such as the service startup scripts in /etc/init.d/ (where service = script’s name) or Ubuntu’s upstart. You can also declare your own start/stop/status commands with the properties of the same names, e.g. start => ‘/bin/myapp start’.

When Everything Fails: Executing Commands

You can also execute any shell command with exec:

    exec { 'install hive':
      command => 'wget http://apache.uib.no/hive/hive-0.8.1/hive-0.8.1-bin.tar.gz -O - | tar -xzC /tmp',
      creates => '/tmp/hive-0.8.1-bin',
      path    => '/bin:/usr/bin',
      user    => 'root',
    }

Programs must have fully qualified paths or you must specify where to look for them with path. It is critical that all such commands can be run multiple times without harm, i.e., that they are idempotent. To achieve that you can instruct Puppet to skip the command if a file exists with creates => …, or based on whether a command succeeds or fails with unless/onlyif. You can also run a command in reaction to a change to a dependent object by combining refreshonly and subscribe.

Other Things to Do

You can create users and groups, register authorized ssh keys, define cron entries, mount disks and much more – check out the Puppet Type Reference. 
Enforcing Execution Order With Require, Before, Notify etc.

Puppet processes the resources specified in a random order, not in the order of specification. So if you need a particular order – such as installing a package first, copying config files second, starting a service third – then you must tell Puppet about these dependencies. There are multiple ways to express dependencies, and several types of dependencies:

- before and require – a simple execution-order dependency
- notify and subscribe – an enhanced version of before/require which also notifies the dependent resource whenever the resource it depends on changes; used with refreshable resources such as services, typically between a service and its configuration file (Puppet will refresh the service by restarting it)

Example:

    service { 'apache2':
      ensure    => running,
      subscribe => File['/etc/apache2'],
      require   => [
        Package['apache2'],
        File['some/other/file']
      ],
    }

Notice that, contrary to a resource declaration, a resource reference has the resource name uppercased and the resource title within []. Puppet is clever enough to derive the “require” dependency between some resources that it manages, such as a file and its parent folder, or an exec and its user – this is well documented for each resource in the Puppet Type Reference in the paragraphs titled “Autorequires:”. You can also express dependencies between individual classes by defining stages, assigning selected classes to them, and declaring the ordering of the stages using before & require. Hopefully you won’t need that.

Bonus Advanced Topic: Using Puppet Modules

Modules are self-contained pieces of Puppet configuration (manifests, templates, files) that you can easily include in your configuration by placing them into Puppet’s manifest directory. Puppet automatically finds them and makes their classes available for use in your manifest(s). You can download modules from the Puppet Forge. 
See the examples on the puppetlabs/mysql module page for how such a module would be used in your manifest. With Vagrant you would instruct Vagrant to make modules from a particular directory available to Puppet with

    config.vm.provision :puppet, :module_path => 'my_modules' do |puppet|
      puppet.manifest_file = 'my_manifest.pp'
    end

(in this case you’d need manifest/ next to your Vagrantfile) and then in your Puppet manifest you could have class { ‘mysql’: } etc.

Where to Go Next?

There are some things I haven’t covered that you’re likely to encounter, such as variables and conditionals, built-in functions such as template(..), parametrized classes, and class inheritance. I have also skipped all master-agent related things such as nodes and facts. It’s perhaps best to learn them when you encounter them. In each case you should have a look at the Puppet Type Reference, and if you have plenty of time you can start reading the Language Guide. In the on-line Puppet CookBook you can find many useful snippets. You may also want to download the Learning Puppet VM to experiment with Puppet (or just try Vagrant). Reference: Minimalistic Practical Introduction to Puppet (Not Only) For Vagrant Users from our JCG partner Jakub Holy at The Holy Java blog....

Blind Dating for Geeks: Questions Candidates Should Ask During Interviews

After a few ‘Questions Candidates Ask In Interviews’ themed articles appeared in my Twitter stream, I was reminded of an article I wrote two years ago called ‘Best Questions To Ask The Interviewer, And When To Ask Them’. I think one key element missing in the new articles is the ‘when to ask them’ detail, which I feel is incredibly important. Also, since JobTipsForGeeks is aimed at technology professionals, there are some nuances that do not apply to other industries.

Why is the timing of the questions important? An interview is nothing more than a blind date, with the goal on both sides being to find out if you want to start seeing each other in a somewhat committed fashion. You want to discover as much as you can about the other party, but first you have to set a positive tone and build trust. We surely would want to find out if our blind date is, say, a serial killer – but leading off with the ‘Are you a serial killer?’ question would seem rude, and we probably wouldn’t get an honest answer anyway.

Below is a list of the best questions to ask, in chronological order. Please keep in mind that you would need to restart from the beginning for every new person you meet in situations where you meet with multiple participants individually. You may not be afforded the opportunity to ask all the questions due to time constraints, so use at least one question from the first section for each person and try to use all the questions at least once at some point in the process.

Setting a positive interview tone OR Cocktails and light conversation

Question: “What is your background and how did you come to work for COMPANY?”

Reason to ask it: Most people genuinely like to talk about themselves (those that do not share this trait will probably not be in the interview), so give them a chance to do so. Don’t be afraid to toss in some remarks and perhaps a follow-up question regarding their background if appropriate. 
You may learn that you have some shared history with this person that could give you a potential ‘in’.

Question: “What do you like best about your job and about working for COMPANY?”

Reason to ask it: This question serves two purposes. First, it gives the employee the opportunity to speak well of the company, which again will give an initial positive vibe to your dialogue. Secondly, what the employee chooses to say they like best can be quite telling. If their answer is ‘environment’, ‘work/life balance’ or ‘the people’, that is a universal positive. If the response is ‘the money’ or ‘vacation time’, you may want to dig deeper to find out why.

BONUS Question: (If possible, find a fairly recent newsworthy item about the company that is both appropriate and positive, and ask an insightful question about it.)

Reason to ask it: This shows that you have done your homework before the interview and that you want to be taken seriously as a candidate. This is something that you want to fully research to prevent making a huge mistake that would make recovery impossible.

Discovery OR Gentle interrogation of your date during dinner

Question: What are the biggest challenges you face?

Reason to ask it: The reason for using the word ‘challenges’ is that it does not have a negative connotation, whereas asking someone for the ‘worst’ element of their job will not give a positive impression. The answer here will start to create an image of what this company is going through today and what the landscape is for tomorrow. At a start-up, you may hear answers about financial challenges, limited resources and a fast pace. Some industries are known for heavy regulation getting in the way of progress. This is all valuable information.

Question: What would the typical day be like for me at COMPANY? 
Reason to ask it: You may get an answer that gives you tangible insight into work/life balance (‘I usually get home in time to watch Jimmy Fallon’), how much of your day may be spent coding or doing other duties, how many meetings you may be pulled into, etc.

Question: What was your career path here and what is a typical career path for my role at COMPANY?

Reason to ask it: Find out if they promote from within and if there are separate technical and non-technical/pure management tracks. Will you be forced into a management role?

Question: How would you describe the environment?

Reason to ask it: Asking an open-ended question like this could lead in several directions. You should be able to ascertain if it is cut-throat or cooperative, how much support is given to technologists, and whether you will be expected to work with teams or as a solo entity.

Question: Management styles? Development processes and methodologies in place?

Reason to ask it: Engineers’ preferences for structure vary greatly. Be sure to drill down to get the best understanding of whether their practices are aligned with your views.

Question: Tech stack?

Reason to ask it: If their answer is a list of products and technologies that are severely dated, it could mean the company doesn’t invest in or even investigate the latest and greatest. Conversely, if they seem overly concerned with the bleeding edge, perhaps they are making tech decisions based on cool factor more than quality. Be sure to listen for telling patterns, such as an abundance of tools from a particular vendor or a wide variety of open source tools. This question also gives you an opening to discuss your experience (and preferences) relevant to their stack.

Question: Why is this position open?

Reason to ask it: Growth or promotion are the two most desirable answers. Perhaps this position is a launching board into higher-level positions, or maybe it is a dead end that will burn you out. 
Closing the deal OR Last call and the drive home

Question: What qualities/background do you think would be key to making someone successful in this position?

Reason to ask it: During the last set of questions, some negative topics could have surfaced. This gets things back on the positive side before the end, as well as giving you more info on whether you would be hitting the ground running or may require some learning curve. (NOTE: This question could also be used as an ice breaker early on.)

Question: What projects are just getting ramped up or are on the horizon?

Reason to ask it: Interviewers will enjoy discussing new endeavors that they think you will find most interesting. If they talk about fixing bugs and maintenance, chances are that is what you will be doing in this job for the foreseeable future. Ideally, the interviewer’s excitement should be palpable.

Question: Where do you see yourself in five years here at COMPANY?

Reason to ask it: This question is generally asked of candidates, so turning it around should provide insight into how he/she really feels about the firm and their prospects. Again, it lets the interviewer talk about himself/herself in a positive fashion, and if the interviewer has a sense of humor, expect an attempt to use it on this question.

The Conclusion OR Goodnight

Question: Is there anything else I can answer for you, or any more information I can provide to help you in your decision?

Reason to ask it: It shows you are forthcoming and trying to be helpful.

Question: I am very interested in this opportunity (or similar sentiment). When do you expect COMPANY might be making a decision?

Reason to ask it: Showing interest is vital, and asking about their timing could lead to information on other candidates. It also may prompt them to ask about your availability to start, which is an obvious buying sign. You want to do this with as little pressure as possible. 
Close by thanking the interviewer for taking the time to speak with you and tell him/her that you look forward to the next steps. Good luck. Reference: Blind Dating for Geeks: Questions Candidates Should Ask (and when to ask them) During Interviews from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Measure execution time in Java – Spring StopWatch Example

There are two ways to measure elapsed execution time in Java: by using System.currentTimeMillis() or by using System.nanoTime(). These two methods can be used to measure elapsed or execution time between two method calls or events in Java. Calculating elapsed time is one of the first things a Java programmer does to find out how many seconds or milliseconds a method takes to execute, or how much time a particular code block takes. Most Java programmers are familiar with System.currentTimeMillis(), which has been there from the beginning, while a more precise time-measurement utility, System.nanoTime(), was introduced in Java 1.5, along with several new language features like generics, enum types, autoboxing and variable arguments (varargs). You can use either of them to measure the execution time of a method in Java, though it is better to use System.nanoTime() for more precise measurement of time intervals. In this Java programming tutorial we will see a simple Java program to measure execution time by using System.nanoTime() and the Spring framework’s StopWatch utility class. This article is a continuation of my posts covering fundamental Java concepts like how to compare Strings in Java, how to write an equals method in Java correctly, and four ways to loop over a HashMap in Java. If you haven’t read them already, you may find them useful.

Java program example to measure execution time in Java

Here is a code example for measuring elapsed time between two code blocks using System.nanoTime. Many open source Java libraries like Apache Commons Lang, Google Commons and Spring also provide a StopWatch utility class which can be used to measure elapsed time in Java. 
StopWatch improves readability and minimizes calculation errors while measuring elapsed execution time, but beware that StopWatch is not thread-safe and should not be shared in a multi-threaded environment; its documentation clearly says that it is more suitable for basic performance measurement in development and test environments than for time calculations in a production environment.

    import org.springframework.util.StopWatch;

    /**
     * Simple Java program to measure elapsed execution time in Java.
     * It shows two ways of measuring time: System.nanoTime(), which was
     * added in Java 5, and StopWatch, a utility class from the Spring Framework.
     */
    public class MeasureTimeExampleJava {

        public static void main(String args[]) {
            // measuring elapsed time using System.nanoTime
            long startTime = System.nanoTime();
            for (int i = 0; i < 1000000; i++) {
                Object obj = new Object();
            }
            long elapsedTime = System.nanoTime() - startTime;
            System.out.println("Total execution time to create 1000K objects in Java in millis: "
                    + elapsedTime / 1000000);

            // measuring elapsed time using Spring's StopWatch
            StopWatch watch = new StopWatch();
            watch.start();
            for (int i = 0; i < 1000000; i++) {
                Object obj = new Object();
            }
            watch.stop();
            System.out.println("Total execution time to create 1000K objects in Java using StopWatch in millis: "
                    + watch.getTotalTimeMillis());
        }
    }

Output:
    Total execution time to create 1000K objects in Java in millis: 18
    Total execution time to create 1000K objects in Java using StopWatch in millis: 15

Which one should you use for measuring execution time in Java?

It depends on which options are available to you. If you are working below JDK 1.5, then System.currentTimeMillis() is the best option in terms of availability, while from JDK 1.5 on, System.nanoTime is great for measuring elapsed time, as it is more accurate and can measure up to nanosecond precision. 
If you are using any of the open source libraries mentioned above, most likely Spring, then StopWatch is also a good option, but as I said earlier, StopWatch is not thread-safe and should only be used in development and test environments. Just as SimpleDateFormat is not thread-safe and you can use a ThreadLocal to create a per-thread SimpleDateFormat, you can do the same thing with StopWatch as well, though I don’t think StopWatch is a heavyweight object like SimpleDateFormat. That’s all on how to measure elapsed or execution time in Java. Get into the habit of measuring the performance and execution time of important pieces of code, especially methods or loops which execute most of the time. Those are the places where a small optimization in code results in a bigger performance gain. Reference: How to measure elapsed execution time in Java – StopWatch Example from our JCG partner Javin Paul at the Javarevisited blog....

Cross Site Scripting (XSS) and prevention

Variants of Cross-site scripting (XSS) attacks are almost limitless, as mentioned on the OWASP site (https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)). Here I propose a Servlet Filter based solution for sanitization of the HTTP request.

The attack

Let’s see how an XSS attack manifests itself. Attached is an over-simplified portlet which shows a scenario that is very common in social and collaboration based systems like forums. See the pseudo-sequence diagram below. Here:

1. There is a form where a user can enter his comments, with a submit button and a textbox named “mytext”. User A renders this form.
2. User A enters a JavaScript snippet into the input text box and submits the form (this is the step where evil enters your app). Just to make the problem concrete, imagine that the script entered by the user sends cookies stored by the app to an attacker’s site.
3. User B logs into the system and wants to see the comments provided by User A. So he goes to the respective page, where the system renders the value of “mytext” provided by A.
4. The browser renders the value of “mytext”, which is a JavaScript snippet that fetches all the cookies of the current site stored for User B and sends them to the attacker’s system.

The prevention (better than cure, always)

We will see how cleansing of HTTP parameters helps in thwarting this kind of attack. For this attack to be successful, what kind of response was sent to the browser when B rendered A’s comments? Something like:

    <div>A's Comments</div>
    <div>
      <script>
        <!-- This script will get all cookies and will send them to attacker's site. -->
      </script>
    </div>

As you can see, the attack was possible due to the fact that, for a browser, an HTML document is a mix of markup and executable code. The ability to mix executable code with markup is a deadly combination which attackers can exploit. Using a Servlet Filter we can cleanse all the input parameters and remove all special characters that can denote executable instructions for the browser. This way no evil enters the system. 
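A sanitizing filter along these lines might be sketched as follows. This is an illustrative reconstruction, not the original post’s attached code: the class name is invented, and the minimal escapeHtml helper stands in for Apache Commons’ StringEscapeUtils so the example has no third-party dependency beyond the Servlet API:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

public class XssEscapeFilter implements Filter {

    /** Wrapper that escapes HTML special characters in every parameter value. */
    static class EscapedRequest extends HttpServletRequestWrapper {
        EscapedRequest(HttpServletRequest request) { super(request); }

        @Override
        public String getParameter(String name) {
            return escapeHtml(super.getParameter(name));
        }

        @Override
        public String[] getParameterValues(String name) {
            String[] values = super.getParameterValues(name);
            if (values == null) return null;
            String[] escaped = new String[values.length];
            for (int i = 0; i < values.length; i++) {
                escaped[i] = escapeHtml(values[i]);
            }
            return escaped;
        }
    }

    // In real code prefer StringEscapeUtils from Apache Commons; this
    // minimal version only covers the characters discussed in the article.
    static String escapeHtml(String value) {
        if (value == null) return null;
        return value.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
                    .replace("\"", "&quot;").replace("'", "&#39;");
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        chain.doFilter(new EscapedRequest((HttpServletRequest) req), res);
    }

    @Override public void init(FilterConfig config) { }
    @Override public void destroy() { }
}
```

Mapped to /* in web.xml, such a filter would ensure that a submitted script tag reaches the application only in its escaped, inert form.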
Here is a very simple Servlet Filter that does this. A wrapper over HttpServletRequest is used, and its methods are overridden to return request parameter values after escaping. For the escaping I suggest using StringEscapeUtils from the Apache Commons project instead of doing custom coding.

Another way is to let users enter whatever they want, but convert <, >, &, ', " to their corresponding character entity codes while rendering. Typically this can be done using JSTL:

<div>A's comments</div>
<div>
<c:out value="${comments}" escapeXml="true" />
</div>

This approach is especially useful where users can share code snippets with each other. Based on the interaction between the user and the system, many other clever ways of launching an XSS attack can be devised. But having absolute control over system input can surely guard against such attacks.

Reference: XSS and prevention from our JCG partner Advait Trivedi at the CoolCode blog....
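The filter code attached to the original post is not reproduced here. As a rough stand-in, the helper below hand-rolls the same five-character escaping that StringEscapeUtils would otherwise provide; the class name XssEscaper and the sample payload are mine, for illustration only. In the real filter you would call something like this from an HttpServletRequestWrapper whose getParameter/getParameterValues overrides escape values before returning them (sketched in the trailing comment, since the Servlet API is not on the classpath here).

```java
public class XssEscaper {

    // Convert the characters that let a browser switch from markup to
    // executable code into their character entity codes.
    public static String escape(String value) {
        if (value == null) {
            return null;
        }
        StringBuilder sb = new StringBuilder(value.length());
        for (char c : value.toCharArray()) {
            switch (c) {
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '&':  sb.append("&amp;");  break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String evil = "<script>stealCookies()</script>";
        // The script tag is neutralized into inert text.
        System.out.println(escape(evil));
    }

    // Inside the filter (sketch only, Servlet API assumed on the classpath):
    //
    //   class EscapingRequest extends HttpServletRequestWrapper {
    //       EscapingRequest(HttpServletRequest req) { super(req); }
    //       @Override public String getParameter(String name) {
    //           return XssEscaper.escape(super.getParameter(name));
    //       }
    //   }
}
```

With this in place, the value stored for "mytext" can no longer be interpreted by the browser as a script when User B's page is rendered.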

JBoss BRMS with JasperReports for Reporting

Introduction

JasperReports is a free downloadable library that can be used for generating rich reports for Java EE applications. This guide also provides the steps to generate report templates using the Jasper iReport designer.

Software requirements

- JBoss BRMS 5.3 (from the customer portal, http://access.redhat.com)
- JasperReports 4.6.0
- Jasper iReports
- Maven (for building the report server)
- Ant (for building JasperReports)

Adding on JasperReports

Just follow these steps to get it up and running.

- Install JBoss BRMS 5.3 standalone.
- Create the following directories in the BRMS install:
  $JBOSS_HOME/server/default/data/Jasper
  $JBOSS_HOME/server/default/data/Jasper/Output
- Download the (latest) JasperReports 4.6.0 from http://jasperforge.org/projects/jasperreports/
- Extract the contents of the downloaded folder into a local directory.
- Go to the jasperreports-4.6.0-project\jasperreports-4.6.0 directory where the build.xml file resides and do an ant build. This will create the distribution jars at $path_to_jasper\jasperreports-4.6.0-project\jasperreports-4.6.0\dist:
  jasperreports-applet-4.6.0
  jasperreports-4.6.0.jar
  jasperreports-fonts-4.6.0
  jasperreports-javaflow-4.6.0
- Copy the above jars to $JBOSS_HOME\server\default\deploy\gwt-console-server.war\WEB-INF\lib
- Get the report server code from the github repository at https://github.com/bijyek/Jasper-Report-Server/tree/master/bpmc-report-server-32b4d53
- Go to the root directory of the downloaded code and do a maven build: mvn clean install. 
This will generate the distribution jars reports-core-1.3.0.jar and report-shared-1.3.0.jar.

- Copy these two jars from the dist directory to $JBOSS_HOME\server\default\deploy\gwt-console-server.war\WEB-INF\lib
- Delete the existing reports-core-final-1.4.0 and reports-shared-final-1.4.0 in $JBOSS_HOME\server\default\deploy\gwt-console-server.war\WEB-INF\lib
- Download the jasper report templates overall_activity.jasper and overall_activity.jrxml from https://github.com/bijyek/Jasper-Report-Server/tree/master/bpmc-report-server-32b4d53/templates and copy them into $JBOSS_HOME/server/default/data/Jasper
- Copy the following library jars from $path_to_jasper\jasperreports-4.6.0-project\jasperreports-4.6.0\lib to $JBOSS_HOME\server\default\deploy\gwt-console-server.war\WEB-INF\lib:
  commons-digester-2.1
  jfreechart-1.0.12
  jcommon-1.0.15

Customization and editing the .jrxml file

- Follow the document JasperReports-Ultimate-Guide-3.0 in the docs folder at $path_to_jasper\jasperreports-4.6.0-project\jasperreports-4.6.0\docs
- Download and install the Jasper iReports designer from http://jasperforge.org/projects/ireport/
- Open the overall_activity.jrxml file in iReports, edit it, save the .jrxml file, and compile it by clicking on the preview tab.
- Copy both the .jrxml (only for future reference) and the .jasper file to $JBOSS_HOME/server/default/data/Jasper

Reference: JBoss BRMS 5.3 – Adding on JasperReports for Reporting from our JCG partner Eric D. Schabell at the Thoughts on Middleware, Linux, software, cycling and other news… blog....
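For convenience, the build-and-copy steps above can be consolidated into a shell sketch. Treat it as a rough setup fragment, not a tested script: JBOSS_HOME, JASPER_SRC and REPORT_SRC are placeholder locations you must adjust, and the exact jar locations inside the report server's build output may differ from what is shown.

```shell
# Placeholder locations -- adjust for your environment.
export JBOSS_HOME=/opt/jboss-brms
export JASPER_SRC=$HOME/jasperreports-4.6.0-project/jasperreports-4.6.0
export REPORT_SRC=$HOME/bpmc-report-server-32b4d53

LIB="$JBOSS_HOME/server/default/deploy/gwt-console-server.war/WEB-INF/lib"

# Create the Jasper data directories (templates go here, output below it).
mkdir -p "$JBOSS_HOME/server/default/data/Jasper/Output"

# Build JasperReports with Ant and copy the distribution jars.
(cd "$JASPER_SRC" && ant)
cp "$JASPER_SRC"/dist/jasperreports-*4.6.0*.jar "$LIB"

# Build the report server, then swap the old report jars for the new ones.
(cd "$REPORT_SRC" && mvn clean install)
rm -f "$LIB"/reports-core-final-1.4.0* "$LIB"/reports-shared-final-1.4.0*
cp "$REPORT_SRC"/dist/reports-core-1.3.0.jar \
   "$REPORT_SRC"/dist/report-shared-1.3.0.jar "$LIB"

# Supporting chart libraries from the JasperReports lib directory.
cp "$JASPER_SRC"/lib/commons-digester-2.1.jar \
   "$JASPER_SRC"/lib/jfreechart-1.0.12.jar \
   "$JASPER_SRC"/lib/jcommon-1.0.15.jar "$LIB"
```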

What can you get out of Kanban?

I’ve spent the last year or so learning more about Kanban and how to use it in software development and IT operations. It’s definitely getting a lot of attention, and I want to see if it can help our development and operations teams work better.

What to Read, What to Read?

There’s a lot to read on Kanban – but not all of it is useful. You can safely ignore anything that tries to draw parallels between Japanese manufacturing and software development; anyone who does this doesn’t understand Lean manufacturing or software development. Kanban (big K) isn’t kanban (little k), and software development isn’t manufacturing.

Kanban in software development starts with David Anderson’s book. It’s heavy on selling the ideas behind Kanban, but unfortunately light on facts and examples – mostly because there wasn’t a lot of experience and data to draw on at the time the book was written. Everything is built up from two case studies in small maintenance organizations.

The first case study (which you can read here online) goes back to 2005, at a small offshore sustaining engineering team at Microsoft. This team couldn’t come close to keeping up with demand, because of the insanely heavyweight practice framework that they were following (CMMI and TSP/PSP) and all of the paperwork and wasted planning effort that they were forced to do. They were spending almost half of their time on estimating and planning, and half of this work would never end up being delivered because more work was always coming in. Anderson and another Microsoft manager radically simplified the team’s planning, prioritization and estimation approach, cutting out a lot of the bullshit so that the team could focus instead on actually delivering what was important to the customers. You can skip over the theory (the importance of Drum-Buffer-Rope) – Anderson later changed his mind about the theory behind the work anyway. 
As he says: “This is a case study about implementing common sense changes where they were needed”.

The second case study is about Anderson doing something similar with another ad hoc maintenance team at Corbis a couple of years later. The approach and lessons were similar, in this case focusing more on visual tracking of work and on moving away from time boxing for maintenance and break/fix work, using explicit WIP limits instead as the forcing function to control the team’s work. The rest of the book goes into the details of tracking work on task boards, setting and adjusting limits, and just-in-time planning. There’s also some discussion of effecting organizational change and continuous improvement, which you can get from any other Agile/Lean source.

Corey Ladas’ book Scrumban is a short collection of essays on Kanban and on combining Scrum and Kanban. The book is essentially built around one essay, which, taking a Lean approach to save time and money, I suggest that you read here instead. You don’t need to bother with the rest of the book unless you want to try to follow along as Ladas works his ideas out in detail. The basic ideas are the same as Anderson’s (not surprising, since they worked together at Corbis): Don’t build features that nobody needs right now. Don’t write more code than you can test. Don’t test more code than you can deploy.

David Anderson has a new book on Kanban coming out soon. Unfortunately it looks like a rehash of some of his blog posts with updated commentary. I was hoping for something with more case studies and data, and follow-ups on the existing case studies to see if the teams were able to sustain their success over time. There is other writing on Kanban, by excited people who are trying it out (often in startups or small teams), or by consultants who have added Kanban to the portfolio of what they are selling – after all, “Kanban is the New Scrum”. 
You can keep up with most of this through the Kanban Weekly Roundup, which provides a weekly summary of links, discussion forums, news groups, presentations and training in Kanban and Lean. And there’s a discussion group on Kanban development which is also worth checking out. But so far I haven’t found anything, on a consistent basis anyway, that adds much beyond what you will learn from reading Anderson’s first Kanban book.

Where does Kanban work best?

Kanban was created to help break/fix maintenance and sustaining engineering teams. Kanban’s pull-based task-and-queue work management matches the unpredictable, interrupt-driven and specialized work that these teams do. Kanban puts names and a well-defined structure around common sense ideas that most successful maintenance, support and operations teams already follow, which should help inexperienced teams succeed. It makes much more sense to use Kanban than to try to bend Scrum to fit maintenance and support, or to follow a heavyweight model like IEEE 1219 or whatever is replacing it.

I can also see why Kanban has become popular with technology startups, especially web/SaaS startups following Continuous Deployment and a Lean Startup model. Kanban is about execution and continuous optimization, removing bottlenecks, reducing cycle time and reacting to immediate feedback. This is exactly what a startup needs to do once they have come up with their brilliant idea.

But getting control over work isn’t enough, especially over the longer term. Kanban’s focus is mostly tactical: on the work immediately in front of the team, on identifying and resolving problems at a micro-level. There’s nothing that gets people to put their heads up and look at what they can and should do to make things better on a larger scale – before problems manifest themselves as delays. 
You still need a layer of product management and risk management over Kanban, and you’ll also need a technical software development practice framework like XP. I also don’t see how – or why – Kanban makes much of a difference in large-scale development projects, where flow and tactical optimization aren’t as important as managing and coordinating all of the different moving pieces. Kanban won’t help you scale work across an enterprise or manage large programs with lots of interdependencies. You still have to do project, program, portfolio and risk management above Kanban’s micro-structure for managing day-to-day work, and you still need an SDLC.

Do you need Kanban?

Kanban is a tool to solve problems in how people get their work done. Anderson makes it clear that Kanban is not enough in itself to manage software development, regardless of the size of the organization – it’s just a way for teams to get their work done better: “Kanban is not a software development life cycle or a project management methodology…You apply a Kanban system to an existing software development process…”

Kanban can help teams that are being crushed under a heavy workload, especially support and maintenance teams. Combining Kanban with a triage approach to prioritization would be a good way to get through a crisis. But I don’t see any advantage in using Kanban for an experienced development team that is working effectively.

Limiting Work in Progress? Time boxing already puts limits around the work that development teams do at one time, giving the team a chance to focus and get things done. Although some people argue that iterations add too much overhead and slow teams down, you can strip the overhead down so that time boxing doesn’t get in the way – so that you really are sprinting.

Getting the team to understand the flow of work, and making delays and blockers visible and explicit, is an important part of Kanban. But Microsoft’s Eric Brechner (er, I mean, “I. M. Wright”) explains that you don’t need Kanban and taskboards to see delays and bottlenecks or to balance throughput in an experienced development team: “Good teams do this intuitively to avoid wasted effort. They ‘right-size’ their teams and work collaboratively to succeed together.” And anybody who is working iteratively and incrementally is already doing, or should be doing, just-in-time planning and prioritization.

So for us, Kanban doesn’t seem to be worth it, at least for now. If your development team is inexperienced and can’t deliver (or doesn’t know what it needs to deliver), or if you’re in a small maintenance or firefighting group, Kanban is worth trying. If Kanban can help operations and maintenance teams survive, and help some online startups launch faster – if that’s all that people ever do with Kanban – the world will still be a better place.

Reference: What can you get out of Kanban? from our JCG partner Jim Bird at the Building Real Software blog....

DevOps: Flexible configuration management? Not so fast!

You’re in charge of establishing a department-wide deployment automation capability. Your fellow developers are excited about it, and their managers are too. There is no shortage of ideas on how it might work:

- “Let us create our own workflows!”
- “We should be able to configure our own servers.”
- “It should be able to deploy from Nexus, Artifactory, S3, or whatever we choose.”
- “We can finally use the app versioning scheme my team likes.”
- “My team should get to do parallel installs if we want.”
- “We should have open APIs so anybody can execute their own deployment solution.”
- “Each team should be able to configure the middleware for their application’s needs.”

Developers hate being told how to do things, so there is a general consensus that if you can make this deployment tool as flexible as possible, you’ll be able to build the best deployment automation system the world has ever seen.

Sounds great, except that it’s totally wrong.

Flexibility kills quality

Drift is evil. Drift causes downtime and rollbacks. Flexibility creates drift.

I’ve been involved in data center migration projects where almost every server in a production farm was configured differently. It’s amazing the application even worked! On many other occasions we rolled back code because the QA and Prod configurations were so different that our testing failed to uncover critical bugs. Although these environments sound ridiculous, I’m confident that they describe a common scenario across enterprise environments. I will also state that we had talented systems administrators managing the environments; unfortunately, each one had the flexibility to manage the systems to their liking.

Our initial investment in deployment automation (and what became devops) was largely driven by a need to eliminate drift and increase availability. We knew automated deployments should be driven by data, and server instance data would be sourced from a CMDB. 
However, we quickly realized that our CMDB schema allowed for configuration drift. This led to one of our first devops principles: don’t manage problems that you can eliminate.

Eliminate drift with inflexible schema data

Tools from operations teams tend to be server or device centric, and we wanted our deployment automation to be app and farm centric. In other words, we wanted to deploy apps to a farm entity, where the server instances are attributes of the farm. However, we found that traditional schemas for configuration data were very flexible. The diagram below shows a typical farm with multiple instances, where each instance has an OS version. Since the OS version can be independently selected for each instance, the schema allows the data to represent drift across the farm. While architecting our app deployment CMDB (interestingly named deathBURRITO), we specifically did not want to manage farm configuration consistency. We simply wanted a guarantee that our farm deployments did not have drift.

[Figure: A typical CMDB schema that allows farm drift.]

To achieve this we made a simple change to the schema that did just that – prevented the data from representing farm drift (pictured below). Although you can still incorrectly represent farm attributes, the data-driven deployment is either 100% right or 100% wrong.

[Figure: A better CMDB schema that prevents farm drift.]

Gratuitous flexibility and useful flexibility

Eliminating schema flexibility to control drift is not that controversial, since most people get it — and support it. But when you start limiting personal preference, man, look out – people get really passionate over stupid things. So we started communicating another one of our devops principles: flexibility is not always a good thing. Your deployment automation should start with inflexibility and provide flexibility as needed. Don’t get me wrong, we absolutely support innovation and the ability to empower our department with tools that enable creativity. 
I often confuse the hell out of people by saying weird stuff like, “by limiting your flexibility, I can offer you more flexibility.” And I actually mean it — because we focus on the flexibility that is actually valuable. The objective is to distinguish between value-added flexibility and gratuitous flexibility, and eliminate the gratuitous junk.

Value-added flexibility can be represented by a middleware choice between Tomcat, JBoss and Glassfish. Each solution provides different features to the development team, and they should have the ability to choose the best match (within reason) for their application requirements. Easy enough; there is value in the options.

Gratuitous flexibility can be represented by allowing multiple install directory variations for each Tomcat app. SysAdmins usually have a preference, and sometimes it’s a very passionate preference. Although the configuration matters, it should support automation and security, not personal preference. There is no inherent value gained by allowing your environment to have different install directories such as /opt, /app, /u01. In fact, allowing options creates complexity for install scripts, logging, permissions, service accounts, monitoring, etc. Pick one and restrict the rest.

One of the great things about automation is the ability to make the deployment platform deliver what you want, and fail what you don’t want. It’s a platform that gives the devops team enforcement power in the IT department that is rarely available. Like most organizations, you probably have many awesome design standards that were drafted but in effect are just glorious shelfware documents. Automation empowers your ability to eliminate drift, control flexibility and operationalize the shelfware.

So back to my statement about limiting flexibility to offer more flexibility? I will argue that by eliminating all the gratuitous variations, you can simplify environment complexity and eliminate the associated busy work and waste. 
I also believe that eliminating the gratuitous variations will allow your devops teams to focus on delivering the value of predictable self-service deployments… Real flexibility is the ability to provide your developer and test teams self-service deployments at any time, over weekends and around the clock.Reference: DevOps: Flexible configuration management? Not so fast! from our JCG partner Willie Wheeler at the Skydingo blog....
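The schema change described in this post can also be sketched in plain code. This is my illustration, not the article's actual deathBURRITO schema: in the first pair of types each instance carries its own OS version, so the data can represent drift; in the second, the OS version lives on the farm, so per-instance drift is simply unrepresentable.

```java
import java.util.List;

// Drift-prone model: each instance selects its own OS version independently,
// so the data can describe a farm whose servers disagree with each other.
record DriftyInstance(String hostname, String osVersion) {}
record DriftyFarm(String name, List<DriftyInstance> instances) {}

// Drift-free model: the OS version is an attribute of the farm, and instances
// only carry identity. There is no field in which drift could be recorded.
record Instance(String hostname) {}
record Farm(String name, String osVersion, List<Instance> instances) {}

public class CmdbSchema {
    public static void main(String[] args) {
        // With the second schema there is no way to give web02 a different
        // OS version -- the deployment is either 100% right or 100% wrong.
        Farm farm = new Farm("web-prod", "RHEL 6.3",
                List.of(new Instance("web01"), new Instance("web02")));
        System.out.println(farm.osVersion() + " applies to every instance");
    }
}
```

The point of the sketch is the same as the article's: don't validate consistency after the fact; pick a data model in which the inconsistency cannot be expressed.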

WSO2 Identity Server: Identity Management platform

WSO2 Identity Server provides a flexible, extensible and robust platform for Identity Management. This blog post looks inside WSO2 Identity Server to identify the different plug points available for Authentication, Authorization and Provisioning. WSO2 Identity Server supports the following standards/frameworks for authentication, authorization and provisioning:

1. SOAP based authentication API
2. Authenticators
3. OpenID 2.0 for decentralized Single Sign On
4. SAML2 Web Single Sign On
5. OAuth 2.0
6. Security Token Service based on WS-Trust
7. Role based access control and user management API exposed over SOAP
8. Fine-grained access control with XACML
9. Identity provisioning with SCIM

1. SOAP based authentication API

WSO2 Identity Server can be deployed over an Active Directory, LDAP [ApacheDS, OpenLDAP, Novell eDirectory, Oracle DS, etc.] or a JDBC based user store. It’s a matter of a configuration change, and once done, end users/systems can use the SOAP based authentication API to authenticate against the underlying user store. Identity Server by default ships with the embedded ApacheDS, but in a real production setup we would recommend going for a more production-ready LDAP server, like OpenLDAP, due to some scalability issues we uncovered in the embedded ApacheDS. The connection to the underlying user store is made through an instance of org.wso2.carbon.user.core.UserStoreManager. Based on your requirements you can either implement the above interface or extend org.wso2.carbon.user.core.common.AbstractUserStoreManager.

2. Authenticators

By default, authentication to the WSO2 Identity Server Management console is via username/password. Although this is what we have by default, we never limit the user to username/password based authentication. It can be based on certificates or any other proprietary token types – the only thing you need to do is write your custom authenticator. There are two types of Authenticators – Front-end Authenticators and Back-end Authenticators. 
Front-end Authenticators deal with the user inputs and figure out what exactly they need in order to authenticate a user. For example, the WebSEAL authenticator reads the basic auth header and the iv-user header from the HTTP request and calls its back-end counterpart to do the actual validation. A Back-end Authenticator exposes its functionality outside via a SOAP based service and can internally connect to a UserStoreManager. The Management console can also have multiple Authenticators configured with it. Based on applicability and priority, one Authenticator will be picked at runtime.

3. OpenID 2.0 for decentralized Single Sign On

WSO2 Identity Server supports the following OpenID specifications:

- OpenID Authentication 1.1
- OpenID Authentication 2.0
- OpenID Attribute eXchange
- OpenID Simple Registration
- OpenID Provider Authentication Policy Extension

OpenID support is built on top of OpenID4Java and is integrated with the underlying user store seamlessly. Once you deploy WSO2 Identity Server over any existing user store, all the users in the user store get an OpenID automatically. If you want to use one user store for OpenID authentication and another for Identity Server management, that is also possible from Identity Server 4.0.0 onwards. WSO2 Identity Server supports both dumb and smart modes, and if you like you can disable the dumb mode. Disabling dumb mode reduces the load on the OpenID Provider and forces relying parties to use smart mode. Identity Server uses a JCache based Infinispan cache to replicate Associations among different nodes in a cluster.

4. SAML2 Web Single Sign On

WSO2 Identity Server supports SAML2 Web Single Sign On. OpenID and SAML2 are both based on the same concept of federated identity. Following are some of the differences between them:

- SAML2 supports single sign-out, but OpenID does not. 
- SAML2 service providers are coupled to SAML2 Identity Providers, but OpenID relying parties are not coupled to OpenID Providers. OpenID has a discovery protocol which dynamically discovers the corresponding OpenID Provider once an OpenID is given.
- With SAML2, the user is coupled to the SAML2 IdP – your SAML2 identifier is only valid for the SAML2 IdP who issued it. But with OpenID, you own your identifier and you can map it to any OpenID Provider you wish.
- SAML2 has different bindings, while the only binding OpenID has is HTTP.

5. OAuth 2.0

WSO2 Identity Server supports OAuth 2.0 Core draft 27. We believe there will not be any drastic changes between draft v27 and the final version of the specification, and we are keeping an eye on where it is heading. Identity Server uses Apache Amber as the underlying OAuth 2.0 implementation.

- Supports all four grant types listed in the specification, namely the Authorization Code grant, Implicit grant, Resource Owner Password grant and Client Credentials grant.
- Supports refreshing access tokens with Refresh tokens.
- Supports the ‘Bearer’ token profile.
- Support for distributed token caching using the WSO2 Carbon Caching Framework.
- Support for different methods of securing access tokens before persisting. An extension point is also available for implementing custom token securing methods.
- Extensible callback mechanism to link the authorization server with the resource server.
- Supports a range of different relational databases to serve as the token store.

6. Security Token Service based on WS-Trust

WSO2 Identity Server supports a Security Token Service based on WS-Trust 1.3/1.4. This is based on Apache Rampart. The STS is seamlessly integrated with the underlying user store, and users can be authenticated against it to issue tokens. User attributes are by default fetched from the underlying user store, but there are extension points where users can write their own attribute callback handlers. 
Once those callback handlers are registered with the STS, attributes can be fetched from any user store outside the default underlying user store. The STS can be secured with any of the following security scenarios:

- UsernameToken
- Signature
- Kerberos
- WS-Trust [here the STS can act as a Resource STS]

7. Role based access control and user management API exposed over SOAP

WSO2 Identity Server can be deployed over an Active Directory, LDAP [ApacheDS, OpenLDAP, Novell eDirectory, Oracle DS, etc.] or a JDBC based user store. It’s a matter of a configuration change, and once done, end users/systems can use the SOAP based API to manage users [add/remove/modify] and check user authorizations against the underlying user store.

8. Fine-grained access control with XACML

WSO2 Identity Server supports XACML 2.0 and 3.0. All policies are stored in XACML 3.0, but the engine is capable of evaluating requests in either 2.0 or 3.0. The XACML PDP is exposed via the following three interfaces:

- SOAP based API
- Thrift based API
- WS-XACML

Identity Server can act as both a XACML PDP and a XACML PAP. These components are decoupled from each other. By default all XACML policies are stored inside the Registry, but users can have their own policy store by extending the PolicyStore API.

9. Identity provisioning with SCIM

The SCIM support in WSO2 Identity Server is based on WSO2 Charon – an open source Java implementation of SCIM 1.0 released under the Apache 2.0 license. WSO2 Identity Server can act as either a SCIM provider or a consumer. Based on configuration, WSO2 IS can provision users to other systems that have SCIM support.

Reference: WSO2 Identity Server : A flexible, extensible and robust platform for Identity Management from our JCG partner Prabath Siriwardena at the Facile Login blog....