


TestNG or JUnit

For many years now I have found myself going back to TestNG whenever it comes to unit testing Java code. Every time I pick up TestNG, people ask me why I switch to it when JUnit is already provided by the default development environment, be it Eclipse or Maven. Continuing the same battle, yesterday I started to look into Spring's testing support, which is also built on top of JUnit. Within a few minutes of using it I was again searching for a feature that I have always found missing in JUnit: TestNG provides parameterized testing using DataProviders. Since I was once again asking myself the familiar question – TestNG or JUnit – I decided to document my reasoning so that next time I am sure which one to pick and why.

Essentially the same

If you are just going to do some basic unit testing, the two frameworks are essentially the same. Both allow you to test code in a quick and effective manner, both have tool support in Eclipse and other IDEs, and both are supported by build frameworks like Ant and Maven. For starters, JUnit has always been the default choice because it was the first unit testing framework for Java and has always been available. Many people I talk to have not even heard of TestNG until we discuss it.

Flexibility

Let us look at a very simple test case for each of the two.

package com.kapil.itrader;

import static org.junit.Assert.assertEquals;

import org.junit.BeforeClass;
import org.junit.Test;

public class FibonacciTest {

    private Integer input;
    private Integer expected;

    @BeforeClass
    public static void beforeClass() {
        // do some initialization
    }

    @Test
    public void fibonacciTest() {
        System.out.println("Input: " + input + ". Expected: " + expected);
        assertEquals(expected, Fibonacci.compute(input));
    }
}

This example shows that I am using a 4.x+ version and making use of annotations. Prior to release 4.0, JUnit did not support annotations, and that was a major advantage TestNG had over its competitor; but JUnit adapted quickly. You can also notice that JUnit supports static imports, so we can do away with the more cumbersome code of previous versions.

package com.kapil.framework.core;

import junit.framework.Assert;

import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.testng.annotations.BeforeSuite;
import org.testng.annotations.Test;

public class BaseTestCase {

    protected static final ClassPathXmlApplicationContext context;

    static {
        context = new ClassPathXmlApplicationContext("rootTestContext.xml");
        context.registerShutdownHook();
    }

    @BeforeSuite
    private void beforeSetup() {
        // Do initialization
    }

    @Test
    public void testTrue() {
        Assert.assertTrue(false);
    }
}

A first look at the two snippets would suggest that both are pretty much the same. However, those who have done enough unit testing will agree with me that TestNG allows for more flexibility. JUnit requires me to declare my initialization method as static, and consequently anything I write in that method has to be static too. JUnit also requires the initialization method to be public, whereas TestNG does not, so I can apply OOP best practices in my test classes as well. TestNG also lets me declare test suites, groups and methods and use annotations like @BeforeSuite, @BeforeMethod and @BeforeGroups in addition to @BeforeClass.
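As a quick illustration of those lifecycle hooks – this sketch is mine, not from the original article, and the group name is invented – a single TestNG class can mix them freely, and none of the methods need to be static:

import org.testng.annotations.BeforeClass;
import org.testng.annotations.BeforeGroups;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.BeforeSuite;
import org.testng.annotations.Test;

public class LifecycleExampleTest {

    @BeforeSuite
    public void initSuite() {
        // runs once before any test in the whole suite, e.g. boot an embedded server
    }

    @BeforeGroups("db")
    public void initDbGroup() {
        // runs once before the first test that belongs to the "db" group
    }

    @BeforeClass
    public void initClass() {
        // runs once before the first test method of this class; does not have to be static
    }

    @BeforeMethod
    public void initMethod() {
        // runs before every test method, comparable to JUnit's @Before
    }

    @Test(groups = "db")
    public void testSomethingAgainstTheDatabase() {
        // the actual test
    }
}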
This is very helpful when it comes to writing integration tests or unit tests that need to access common data sets.

Test isolation and dependency testing

JUnit is very effective when it comes to testing in isolation. Isolation essentially means that you cannot control the order of execution of tests, so if you have two tests that must run in a specific order because of some dependency, you cannot do that with JUnit. TestNG, however, allows you to do this very effectively. In JUnit you can work around the problem, but the workaround is neither neat nor easy.

Parameter based testing

A very powerful feature that TestNG offers is parameterized testing. JUnit has added some support for this in the 4.5+ versions, but it is very basic and not as effective as TestNG's. If you have worked with FIT you will know what I am talking about. I have modified my previous test case to include parameterized testing.

package com.kapil.itrader;

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;

import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class FibonacciTest {

    private Integer input;
    private Integer expected;

    @Parameters
    public static List data() {
        return Arrays.asList(new Integer[][] {
            { 0, 0 }, { 1, 1 }, { 2, 1 }, { 3, 2 }, { 4, 3 }, { 5, 5 }, { 6, 8 }
        });
    }

    @BeforeClass
    public static void beforeClass() {
        System.out.println("Before");
    }

    public FibonacciTest(Integer input, Integer expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void fibonacciTest() {
        System.out.println("Input: " + input + ". Expected: " + expected);
        assertEquals(expected, Fibonacci.compute(input));
    }
}

You will notice that I have used the @RunWith annotation to allow my test case to be parameterized. In this case the inline method data(), annotated with @Parameters, is used to provide data to the class. The biggest issue, however, is that the data is passed to the class constructor. This means I can only put logically bound test cases in this class, and I will end up with multiple test classes for one service, because the various methods of the service will require different data sets. The good thing is that various open source frameworks have extended this approach and added their own RunWith implementations to allow integration with external sources like CSV, HTML or Excel files. TestNG provides this support out of the box – not by reading from CSV or external files, but through DataProviders.

package com.kapil.itrader.core.managers.admin;

import org.testng.Assert;
import org.testng.annotations.Test;

import com.uhc.simple.common.BaseTestCase;
import com.uhc.simple.core.admin.manager.ILookupManager;
import com.uhc.simple.core.admin.service.ILookupService;
import com.uhc.simple.dataprovider.admin.LookupValueDataProvider;
import com.uhc.simple.dto.admin.LookupValueRequest;
import com.uhc.simple.dto.admin.LookupValueResponse;

/**
 * Test cases to test {@link ILookupService}.
 */
public class LookupServiceTests extends BaseTestCase {

    @Test(dataProvider = "LookupValueProvider", dataProviderClass = LookupValueDataProvider.class)
    public void testGetAllLookupValues(String row, LookupValueRequest request, LookupValueResponse expectedResponse) {
        ILookupManager manager = super.getLookupManager();
        LookupValueResponse actualResponse = manager.getLookupValues(request);
        Assert.assertEquals(actualResponse.getStatus(), expectedResponse.getStatus());
    }
}

The snippet above shows that I have used dataProvider as a value on the annotation and then supplied a class which is responsible for creating the data that is passed to the method at invocation time. Using this mechanism I can write test cases and their data providers in a decoupled fashion and use them very effectively.

Why I choose TestNG

For me, parameterized testing is the biggest reason to choose TestNG over JUnit. But everything listed above is why I always want to spend the few minutes it takes to set up TestNG in a new Eclipse installation or Maven project. TestNG is very useful when it comes to running big test suites. For a small project or a training exercise JUnit is fine, because anyone can start with it very quickly; but not for projects that need thousands of test cases, where most of those test cases have various scenarios to cover.

Reference: TestNG or JUnit from our JCG partner Kapil Viren Ahuja at the Scratch Pad blog.
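Since the article never shows what a data provider itself looks like, here is a small self-contained sketch of the pattern (the class, method and data below are invented for illustration and are not part of the original project). The provider can also live in a separate class and be referenced through dataProviderClass, exactly as LookupServiceTests does above.

import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class SquareTest {

    // Each Object[] row produces one invocation of the test method;
    // its elements map to the test method's parameters in order.
    @DataProvider(name = "squares")
    public Object[][] squares() {
        return new Object[][] {
            { 2, 4 },
            { 3, 9 },
            { 4, 16 },
        };
    }

    @Test(dataProvider = "squares")
    public void testSquare(int input, int expected) {
        Assert.assertEquals(input * input, expected);
    }
}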

Eclipse Code Formatting Tips

I have lately been assigned some code review and code quality fix tasks on a large enterprise Java project, trying to assist the existing development team while waiting to resume my old duties on another project. Kind of fun, but at the same time dangerous enough, since you don't want to break anything important or ruin the work of your colleagues while they rush towards an early release. With no knowledge of the underlying business, you have to review even the smallest change multiple times before you make it. Sometimes coding style and the way we structure our code prove to be very important for people coming into the project later on, aiming to resume our work or do maintenance (like I am doing at the moment on a code base that I don't really master).

Tip 1: Write code thinking of the next guy/girl taking over your work.

I am a great supporter of the above idea. Sometimes we rush into hacking pieces of code that look smart or great at the time, but in a couple of weeks even we cannot understand how and why we did it that way. Try to keep it simple and add inline comments if you feel like it. I really enjoy reading comments in the code explaining why a piece of code was written one way and not the other. These small hints act as time savers the next time (in the long, long future) you have to review, fix or change this part.

Tip 2: Eclipse: prevent auto formatting in certain pieces of code.

There are cases where you have to write pieces of code that look a bit complicated, or you want to save some lines instead of breaking down the logic, especially in some trivial parts. I came across a couple of long hashCode() and equals() methods that were not amazingly complex but at the same time not obvious. In order to fully understand why some of the checks performed in these methods were there, I had to manually format the code into a form where the logic of the ifs and the various logical operators was obvious. There are other cases where tools like FindBugs or CheckStyle will alert you about certain development decisions inside such methods – alerts that most of the time will not be valid. Eclipse gives you the freedom to partially disable its formatter by adding a special tag in your code. All you have to do is browse to the section below under your Eclipse Preferences:

Preferences – Java – Code Style – Formatter – Active Profile (you need your own custom profile, you cannot change the Eclipse built-in one) – Edit – On/Off Tags tab – enable.

It is a good thing to configure your own formatter profile (per project is even better, or company-wide). You may inherit the basic Eclipse (built-in) profile and make your own. When you have the above option enabled, you can exclude the code you don't want automatically formatted again by you or a colleague, so that it does not become difficult to read (aka ugly).

//@formatter:off
if (aFlag) {
    // do this
} else if (anotherFlag) {
    // do that etc
}
//@formatter:on

Watch out: make sure all the members of the team share the same options, otherwise upon commit another member with the default configuration will turn your custom formatting back to its previous state.

Tip 3: Eclipse: put braces in if statements, loops, equals() etc.

This is a long-lasting battle: there are developers who dislike braces, some others (like me) who think they improve readability, and some others who don't mind in general. I really hate missing braces (my personal preference) and I usually add them even in simple one-line if statements.
You may have a look at the official Java Code Conventions here.

//@formatter:off
// I don't like this style
if (aFlag)
    System.out.println("Do something");

// I like this style
if (anotherFlag) {
    System.out.println("Do something better");
}
//@formatter:on

Eclipse can help you fix this bad coding style in auto-generated code as well as in formatting. There is a 'Clean Up' section under the Eclipse Java preferences that enforces 'blocks' on the related code structures:

Preferences – Java – Code Style – Clean Up – Active Profile (you need your own custom profile, you cannot change the Eclipse built-in one) – Edit – Code Style – use blocks in if/while/for/do statements. I have the 'Always' setting checked.

Watch out (again): make sure all the members of the team share the same options.

There is a special case, as one of my colleagues points out, related to the equals() method generated by the Eclipse IDE. In older versions of Eclipse there was actually some sort of styling bug, but it has been fixed from version 3.5 onwards. When you select Source -> Generate hashCode() and equals(), make sure to select 'Use blocks in if statements'. That way you will save yourself some code quality warnings from the related tools.

That is all for now. Many thanks for the concerns, tips and questions to GeorgeK and Andreas.

Reference: Eclipse Code Formatting Tips from our JCG partner Paris Apostolopoulos at the Papo's log.

First steps with Scala, say goodbye to bash scripts…

Those who know me are aware that I've been following Play framework, and actively taking part in its community, for a couple of years. Play framework 2.0 is right around the corner, and its core is programmed in Scala, so it's a wonderful opportunity to give this object-oriented / functional hybrid beast a try. Like many others, I will pick a very simple script for my first steps.

Finding an excuse to give Scala a try

With a couple of friends we are on the way to translating the Play framework documentation to Spanish (go have a look at it at http://playdoces.appspot.com/; by the way, you are more than welcome to collaborate with us). The documentation is composed of a bunch of .textile files, and I had a very simple and silly bash script to track our progress. Every file that has not yet been translated has the phrase "todavía no ha sido traducida" in its first line.

echo pending: `grep "todavía no ha sido traducida" * | wc -l` / `ls | wc -l`

Which produced something like

pending: 40 / 63

Pretty simple, right? I just wanted to develop a simple Scala script to count the translated files, and also their size, to know how much work we had ahead.

Scala as a scripting language

Using Scala as a scripting language is pretty simple. Just enter some Scala code in a text file and execute it with "scala file.scala". You can also try it with the interactive interpreter, better known as the REPL (well, it's not really an interpreter, but a Read-Evaluate-Print Loop, which is where the REPL name comes from). On Linux you can also execute scripts directly from the shell by marking the Scala file as executable and adding these lines to the beginning of the file:

#!/bin/sh
exec scala "$0" "$@"
!#

Tip: you can speed up script execution A LOT by adding a -savecompiled flag, like it says on the scala command man page:

#!/bin/sh
exec scala -savecompiled "$0" "$@"
!#

Classes and type inference in Scala

So I created a DocumentationFile, with a name, a length and an isTranslated property.

class DocumentationFile(val file: File) {

  val name = file.getName
  val length = file.length
  val isTranslated = (firstLine.indexOf("Esta página todavía no ha sido traducida al castellano") == -1)

  def firstLine = new BufferedReader(new FileReader(file)).readLine
}

Scala takes away a lot of boilerplate code. The constructor is right there, along with the class declaration. In our case, the DocumentationFile constructor takes a java.io.File as argument. Scala also makes heavy use of type inference to relieve us from having to declare every variable's type. That's why you don't have to specify that name is a String, length a Long and isTranslated a Boolean. You still have to declare types on method arguments, but usually you can omit them everywhere else.

Working with collections

Next I needed to get all the textile files from the current directory, instantiate a DocumentationFile for each of them, and save them in an Array for later processing.

import java.io._

val docs = new File(".").listFiles
  .filter(_.getName.endsWith(".textile"))  // process only textile files
  .map(new DocumentationFile(_))

Technically speaking, it's just one line of code.
The "_" is just syntactic sugar; we could have written it in a more verbose way like this:

val docs = new File(".").listFiles
  .filter( file => file.getName.endsWith(".textile") )  // process only textile files
  .map( file => new DocumentationFile(file) )

Or if you are a curly braces fan:

val docs = new File(".").listFiles
  .filter { file =>
    file.getName.endsWith(".textile")  // process only textile files
  }
  .map { file =>
    new DocumentationFile(file)
  }

Higher order functions

Once we have all the textile files, we'll need the translated ones.

val translated = docs.filter(_.isTranslated)

Here we are passing the filter method a function as a parameter (that's what is called a higher order function). That function is evaluated for every item in the Array, and if it returns true, that item is added to the resulting Array. The "_.isTranslated" stuff is once again just syntactic sugar. We could also have written the function as follows:

val translated = docs.filter( (doc: DocumentationFile) => doc.isTranslated )

Functional versus imperative: to var or not to var

Now I need to calculate the quantity and size of the translated and not-yet-translated files. Counting the files is pretty easy: I just have to use "translated.length" to know how many files have been translated so far. But to count their size I have to sum the size of each one of them. This was my first attempt:

var translatedLength = 0L
translated.foreach( translatedLength += _.length )

In Scala we can declare variables with the "var" and "val" keywords; the former are mutable, while the latter are immutable. Mutable variables are read-write, while immutable variables can't be reassigned once their value has been established (think of them like final variables in Java). While Scala allows you to work in an imperative or a functional style, it really encourages the latter. Programming in Scala, kind of the Scala bible, even teaches you how to refactor your code to avoid the use of mutable variables and get your head used to a more functional programming style. These are several ways I've found to calculate it in a more functional style (thanks to Stack Overflow!):

val translatedLength: Long = translated.fold(0L)( (acum: Long, element: DocumentationFile) => acum + element.length )

// type inference to the rescue
val translatedLength = translated.foldLeft(0L)( (acum, element) => acum + element.length )

// syntactic sugar
val translatedLength = translated.foldLeft(0L)( _ + _.length )

// yes, the if statement is also an expression, just like the a ? b : c Java operator
val translatedLength = if (translated.length == 0) 0 else translated.map(_.length).sum

I've finally settled on this simple and short form:

val translatedLength = translated.map(_.length).sum
val docsLength = docs.map(_.length).sum

Default parameters and passing functions as arguments

Now I have all the information I needed, so I just have to show it on screen. I also wanted to show the file size in kb. Once again, this was my first attempt:

println(
  "translated size: " + asKB(translatedLength) + "/" + asKB(docsLength) + " " +
  translatedLength * 100 / docsLength + "% "
)

println(
  "translated files: " + translated.length + "/" + docs.length + " " +
  translated.length * 100 / docs.length + "% "
)

def asKB(length: Long) = (length / 1000) + "kb"

And this was the output:

translated size: 256kb/612kb 41%
translated files: 24/64 37%

Well, it worked, but it could definitely be improved; there was too much code duplication.
So I created a function that took care of it all:

def status(
    title: String = "status",
    current: Long,
    total: Long,
    format: (Long) => String = (x) => x.toString): String = {

  val percent = current * 100 / total

  title + ": " + format(current) + "/" + format(total) + " " + percent + "%" +
    " (pending " + format(total - current) + " " + (100 - percent) + "%)"
}

The only tricky part is the format parameter. It's just a higher order function that by default simply converts the passed number to a String. We use that function like this:

println( status("translated size", translatedLength, docsLength, (length) => asKB(length) ) )

println( status("translated files", translated.length, docs.length) )

And that's it. It's really easy to achieve this kind of stuff using Scala as a scripting language, and on the way you may learn a couple of interesting concepts and take your first steps into functional programming. This is the complete script; here you have a github gist and you can also find it in the Play Spanish documentation project.

#!/bin/sh
exec scala "$0" "$@"
!#

import java.io._

val docs = new File(".").listFiles
  .filter(_.getName.endsWith(".textile"))  // process only textile files
  .map(new DocumentationFile(_))

val translated = docs.filter(_.isTranslated)  // only already translated files

val translatedLength = translated.map(_.length).sum
val docsLength = docs.map(_.length).sum

println( status("translated size", translatedLength, docsLength, (length) => asKB(length) ) )

println( status("translated files", translated.length, docs.length) )

def status(
    title: String = "status",
    current: Long,
    total: Long,
    format: (Long) => String = (x) => x.toString): String = {

  val percent = current * 100 / total

  title + ": " + format(current) + "/" + format(total) + " " + percent + "%" +
    " (pending " + format(total - current) + " " + (100 - percent) + "%)"
}

def asKB(length: Long) = (length / 1000) + "kb"

class DocumentationFile(val file: File) {

  val name = file.getName
  val length = file.length
  val isTranslated = (firstLine.indexOf("Esta página todavía no ha sido traducida al castellano") == -1)

  override def toString = "name: " + name + ", length: " + length + ", isTranslated: " + isTranslated

  def firstLine = new BufferedReader(new FileReader(file)).readLine
}

Reference: First steps with Scala, say goodbye to bash scripts… from our JCG partner Sebastian Scarano at the Having fun with Play framework! blog.

Java Annotations & A Real World Spring Example

An "annotation" is a kind of programming language construct used as a "marker". Annotations can be thought of as comments that the language engine can understand. They don't directly affect program execution, but they can affect it indirectly if desired.

Definition

An annotation is defined with the @interface keyword and looks similar to an interface. It has attributes, which are defined like interface methods. Attributes can have default values. Let's define an annotation named "Page", which marks the UI pages of an application:

public @interface Page {
    int id();
    String url();
    String icon() default "[none]";
    String name() default "[none]";
}

Usage

Annotations are widely used to inform the compiler or to drive compile-time, runtime or deployment-time processing. Using an annotation is even simpler:

@Page(id=1, url="studentView", icon="icons/student.png", name="Students")
public class StudentWindow extends Window { ... }

Annotations can also be placed on methods and fields:

@AnAnnotation
public String getElementName() { ... }

@AnAnnotation(type="manager", score=3)
public int income;

Examples

1) Reflection/code generation: methods carrying a specific annotation can be processed at runtime:

public @interface MyAnnotation { ... }

public class TestClass {
    @MyAnnotation
    public static void method1() { ... }

    @MyAnnotation
    public static void method2() { ... }

    @MyAnnotation
    public static void method3() { ... }
}

public static void main(String[] args) throws Exception {
    for (Method method : Class.forName("TestClass").getMethods()) {
        if (method.isAnnotationPresent(MyAnnotation.class)) {
            // do what you want
        }
    }
}

2) Spring bean configuration (this section requires Spring bean configuration knowledge): let's use our "Page" annotation again:

package com.cmp.annotation;

public @interface Page {
    int id();
    String url();
    String icon() default "[none]";
    String name() default "[none]";
}

Say that we have a few classes carrying the @Page annotation in a package:

@Page(id=1, url="studentView", icon="icons/student.png", name="Students")
public class StudentWindow extends Window { ... }

If we define a bean configuration as below in a Spring application-context.xml file, Spring will create instances of the classes that have the @Page annotation and are placed in the given package.

<context:component-scan base-package="com.cmp.ui" annotation-config="true">
    <context:include-filter type="annotation" expression="com.cmp.annotation.Page"/>
</context:component-scan>

So we have forced Spring to instantiate only a selection of classes at runtime. For more detailed info about annotations, please refer to:

http://docs.oracle.com/javase/1.5.0/docs/guide/language/annotations.html
http://docs.oracle.com/javase/tutorial/java/javaOO/annotations.html

Reference: Java Annotations & A Real World Spring Example from our JCG partner Cagdas Basaraner at the CodeBalance blog.
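One detail worth adding (my note, not part of the original article): for the reflection example above to find the annotation at runtime, the annotation itself must be declared with runtime retention, and it is good practice to restrict where it may be applied. A minimal sketch, reusing the Page annotation from above:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)  // keep the annotation visible to reflection at runtime
@Target(ElementType.TYPE)            // allow it only on classes and interfaces
public @interface Page {
    int id();
    String url();
    String icon() default "[none]";
    String name() default "[none]";
}

Without @Retention(RetentionPolicy.RUNTIME), isAnnotationPresent() would simply return false, because with the default retention (CLASS) the annotation is kept in the class file but not made available to the reflection API.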

Master Detail CRUD operations with Regions ADF 11g

Hi, this is an example that demonstrates how to create a Master Detail relationship between tables by using Regions. The main purpose of regions is the notion of reusability. With regions and bounded task flows we can reuse our pages in many other pages, keeping the same functionality and having a cleaner approach. Download the Sample Application.

For this example we are going to use only one Model project and keep things simple. We are going to create our Business Components through JDeveloper and its wizards. We are using Master Detail for Departments and Employees.

So, we are going to create two Bounded Task Flows that use fragments: one for the Departments and one for the Employees. In each bounded task flow we drag and drop a view and give it the appropriate name, departments or employees. Then in the unbounded flow we create a jspx that will have two Regions defined: one for the Departments BTF and one for the Employees BTF.

For Departments we are going to drag and drop the Departments iterator as a form with navigation buttons and a submit button. Additionally, we add the CreateInsert and Delete operation buttons next to Submit. We do the same with Employees. The only difference here is that we drop an editable table and not a form. Additionally, we drag it from the hierarchy and not the standalone one in our Data Control; this means that we drag the detail Employees.

Next, we are going to create an index page in our unbounded task flow that will contain our Bounded Task Flows as regions. In order to do that, after we create the index page, we simply drag and drop each Bounded Task Flow as a Region. We do the same for the Employees Bounded Task Flow.

Up to now, we have our hierarchy done and well placed. Since we share the same application module instance, we are good to go! All that is left now is to place commit and rollback buttons in our Departments fragment and we are done! For the rollback button we have to make a specific adjustment: the emps region needs to be refreshed to reflect that the rollback was performed. For this reason we set the refresh property on the detail region accordingly. What we do here is set a refresh condition on our detail region; in other words, refresh the emps fragment when the dept fragment is refreshed.

NOTE: this is a simple application demonstrating the ease of use of Regions. It is not intended to cover all aspects of regions. Regards.

Reference: Master Detail CRUD operations with Regions ADF 11g from our JCG partner Dimitrios Stassinopoulos at the Born To DeBug blog.

Extending your JPA POJOs

Extensibility is an important characteristic of many architectures. It is a measure of how easy (or difficult) it is to add or change functionality without impacting existing core system functionality.

Let's take a simple example. Suppose your company has a core product to track all the users in a sports club. Within your product architecture, you have a domain model represented by JPA POJOs. The domain model contains many POJOs including – of course – a User POJO.

package com.alex.staveley.persistence;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

/**
 * User entity. Represents Users in the Sports Club.
 *
 * Note: The SQL to generate a table for this in MySQL is:
 *
 * CREATE TABLE USER (ID INT NOT NULL auto_increment, NAME varchar(255) NOT NULL,
 * PRIMARY KEY (ID)) ENGINE=InnoDB;
 */
@Entity
public class User {

    /* Surrogate Key - automatically generated by DB. */
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @Id
    private int id;

    private String name;

    public int getId() {
        return id;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}

Now, some customers like your product but need some customisations done before they buy it. For example, one customer wants the attribute birthplace added to the User and wants this persisted. The logical place for this attribute is – of course – the User POJO, but no other customer wants this attribute. So what do you do? Do you make a specific User class just for this customer and then swap it in just for them? What happens when you change your product User class then? What happens if another customer wants another customisation? Or changes their mind? Are you sensing things are going to get messy?

Thankfully, one implementation of JPA, EclipseLink, helps out here. The 2.3 release (available since June 2011, the latest release being the 2.3.2 maintenance release from 9th December 2011) includes some very nice features which work a treat for this type of scenario. Let's elaborate.

By simply adding the @VirtualAccessMethods EclipseLink annotation to a POJO, we signal to EclipseLink that the POJO may have some extra (also known as virtual) attributes. You don't have to specify any of these extra attributes in code, otherwise they wouldn't be very virtual! You just have to specify a generic getter and setter to cater for getting and setting them. You also need somewhere to store them in memory, something like a good old HashMap, which of course should be transient because we don't persist the HashMap itself. Note: they don't have to be stored in a HashMap, it's just a popular choice!

Let's take a look at our revamped User, which is now extensible!

import java.util.HashMap;
import java.util.Map;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Transient;

import org.eclipse.persistence.annotations.VirtualAccessMethods;

@Entity
@VirtualAccessMethods
public class User {

    /* Surrogate Key - automatically generated by DB. */
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @Id
    private int id;

    private String name;

    @Transient
    private Map<String, Object> extensions = new HashMap<String, Object>();

    public int getId() {
        return id;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public <T> T get(String name) {
        return (T) extensions.get(name);
    }

    public Object set(String name, Object value) {
        return extensions.put(name, value);
    }
}

So, is that it? Well, there's a little bit more magic. You have to tell EclipseLink about your additional attributes; more specifically, what their names and data types are. You do this by updating your eclipselink-orm.xml, which resides in the same META-INF folder as the persistence.xml.
<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/orm"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://www.eclipse.org/eclipselink/xsds/persistence/orm http://www.eclipse.org/eclipselink/xsds/eclipselink_orm_2_1.xsd"
                 version="2.1">
    <entity class="com.alex.staveley.persistence.User">
        <attributes>
            <basic name="thebirthplace" attribute-type="String" access="VIRTUAL">
                <column name="birthplace"/>
                <access-methods get-method="get" set-method="set"/>
            </basic>
        </attributes>
    </entity>
</entity-mappings>

This configuration simply states that the User entity has an additional attribute which in Java is called "thebirthplace", and that it is virtual. This means it is not explicitly defined in the POJO, but if we were to debug things, we'd see an attribute with the name "thebirthplace" in memory. The configuration also states that the corresponding database column for the attribute is birthplace, and that EclipseLink can get and set it using the generic get/set methods.

You want to test it? Well, add the column to your database table. In MySQL this would be:

alter table user add column birthplace varchar(64)

Then run this simple test:

@Test
public void testCreateUser() {
    User user = new User();
    user.setName("User1Name");
    user.set("thebirthplace", "donabate");
    entitymanager.getTransaction().begin();
    entitymanager.persist(user);
    entitymanager.getTransaction().commit();
    entitymanager.close();
}

So now we can have one User POJO in our product code which is extensible. Each customer can have their own attributes added to the User, as they wish. And of course, each customer is kept separate from all other customers very easily, by just ensuring each customer's extensions reside in a customer-specific eclipselink-orm.xml. Remember, you are free to name these files as you want, and if you don't use the default names you just update the persistence.xml to state which names you are using.

This approach means that when we want to update User in our product, we only have to update one and only one User POJO (because we have ensured there is only one). But when specific attributes have to be added for specific customer(s), we don't touch the User POJO code. We simply make the changes to the XML and do not have to recompile anything from the core product. And at any time it is easy to see what the customisations are for any customer by simply looking at the appropriate eclipselink-orm file. Ye Ha. Happy Extending!

References:
Extending your JPA POJOs from our JCG partner Alex Staveley at Dublin's Tech Blog
http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Advanced_JPA_Development/Extensible_Entities
http://www.eclipse.org/eclipselink/
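As a small follow-up of my own (not in the original post): reading the virtual attribute back goes through the same generic accessor, with the return type inferred at the call site. A sketch that mirrors the test above, assuming the same entitymanager field and the data persisted there:

@Test
public void testReadVirtualAttribute() {
    // uses the same "entitymanager" field as testCreateUser() above
    User user = entitymanager
            .createQuery("SELECT u FROM User u WHERE u.name = :name", User.class)
            .setParameter("name", "User1Name")
            .getSingleResult();

    // The virtual attribute is fetched through the generic getter declared on User.
    String birthplace = user.get("thebirthplace");
    Assert.assertEquals("donabate", birthplace);

    entitymanager.close();
}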

OpenShift Express: Deploy a Java EE application (with AS7 support)

During the past few years I have been hearing about "cloud" services more and more. Initially I wasn't really curious to try them out, but a few months (a year?) back I decided to see what it was all about. I have been involved with Java EE development for more than 7 years now, so I decided to see what it takes to deploy my Java EE application to the cloud. I set out looking for documentation, blog articles and the like. At that point, whichever cloud service I thought of trying out would require me to provide my credit card details even for a trial application. I wasn't too keen to hand over my credit card details just to try out a few applications of mine, so I kind of gave up on trying out my applications on the cloud, although I kept reading about what other developers were doing to deploy their applications there.

At about the same time, I came across an elaborate writeup on how one developer had set up his application, involving Weld and JSF, on Google App Engine – Part 1, Part 2. The blog was well written and explained what was required to get your Java EE application up and running on a cloud service. But the important piece of information in those articles was that users who had an application implemented as per Java EE standards (to be portable) had to change many parts of the application just to get it running on the cloud, because many of the Java EE technologies weren't supported by the cloud service provider. This didn't look appealing to me. After all, what would I gain by doing that? So at that point I, as a Java EE developer, wasn't too interested in experimenting with deploying my application to the cloud.

Enter OpenShift!

This month, the OpenShift announcement about being able to deploy your JBoss AS7 Java EE applications to the cloud caught my eye. By the way, I do work for RedHat and am part of the JBoss AS7 team, but I wasn't keeping a watch on what the OpenShift team was up to, so this announcement came as a pleasant surprise! So I decided to give it a try. After reading some of the documentation on the project site, I found out that OpenShift offers two different services: "OpenShift Express" and "OpenShift Flex". OpenShift Express is free to use (one more piece of good news for me), while OpenShift Flex requires your Amazon EC2 credentials and you'll be charged for the EC2 usage (there's however a free trial going on currently). I decided to give OpenShift Express a try since it is free and suits my current need of just trying out a quick and simple Java EE application deployment and access to that application.

So here's what I did to deploy my Java EE application – which uses the technologies available in the Java EE 6 web profile and which deploys fine on my local AS7 instance – to OpenShift Express. You might have already guessed that I'm no expert on OpenShift (or cloud services in general), so this article doesn't contain any advanced technical details, but is more of a how-to on deploying Java EE applications to OpenShift Express. So let's start then.

Sign up

The first step is to sign up here to create an account for yourself. The sign up just requires a valid email address to which your account details will be dispatched. On signing up, you'll receive a mail which contains a link to activate your account and will take you to the login screen. Log in using the email id and password that you used to register.

Get Access to OpenShift Express

So let's go to the OpenShift Express page.
On that page you will notice a "Get Access to Express" button on the left hand side. Click on it to get access to "Express". You'll be notified (immediately) through a mail to the email id with which you registered. Check the mail, which will contain a link to a quick start guide to help you get started with OpenShift Express.

Install client tools

The quick start contains the instructions to get you started with the installation procedure. The first step is installing a few client tools on your system to help you interact with OpenShift. Follow those instructions to install the client tools (I won't repeat them here, since they are well explained in that guide).

Create a domain

Once we have the client tools, we are ready to set up our "domain" on the OpenShift cloud. Setting up a domain will create a unique domain name that you can use for your applications. The domain name will be part of the URL which you will use to access the application and which you'll publish to your users. The command to create the domain is easy:

rhc-create-domain -l <email-id-you-registered-with> -n <domain-name-of-your-choice>

Running that command will ask you for the password that you used to register. Enter that password and let the command complete (a few seconds). The rhc-create-domain command is part of the client tools that you installed earlier. If you haven't yet installed those tools, then you won't be able to use these commands, so don't miss that step! rhc-create-domain accepts a few more optional parameters. To see the list of accepted parameters, you can run the following command:

rhc-create-domain --help

Create a jbossas-7.0 application

Once you have successfully created a domain, your next step is to create an "application". Currently OpenShift Express supports different "types" of applications, each of them backed by Git (which is a version control system). At the time of writing this post, the supported application types are jbossas-7.0, perl-5.10, rack-1.1, wsgi-3.2 and php-5.3. I'm interested in deploying a Java EE application, so I'll be creating a "jbossas-7.0" application. This type of application provides you a JBoss AS 7.0.0 instance in the OpenShift cloud to which you can deploy your applications. So let's now create an application of type jbossas-7.0. Note that the term "application" can be a bit confusing (at least I found it a bit confusing), because all you are doing at this point is setting up a JBoss AS7 server. The command to create an application is rhc-create-app, and it accepts multiple options. For a complete list of options run:

rhc-create-app --help

To create a jbossas-7.0 application, we'll run the following command:

rhc-create-app -a <application-name> -l <email-id-you-used-to-register> -t jbossas-7.0 -r <path-on-local-filesystem-for-the-repository>

Running that command will ask you for the password that you used to register. Enter that password and let the command complete (a few seconds). The -a option lets you specify the name for your application. This name will be part of the URL that you use to access your application. If your application name is "foo" and the (previously created) domain name is "bar", then the URL to access your application will be http://foo-bar.rhcloud.com/. The -t option in that command specifies the application type.
In our case, we are interested in jbossas-7.0. The other option of importance is the -r option, which points to a folder on your local filesystem where OpenShift will store all the data related to your application. Part of that data will be a local copy of the git repo (version control system). We'll see this in more detail later on in this blog.

Access your server URL

Once the command successfully completes, it will print out the URL where the application is available. You can (immediately) use that URL to access the application. On accessing that URL you'll notice a welcome page, which is an indication that the application has been installed successfully and is available for access. For me, the URL to the newly created application was http://jaikiran-jbossas.rhcloud.com/. So at this point, we have created a domain and then an application, and have made sure that it's accessible to the world. In short, your cloud server is up and running and you can now deploy your Java EE applications to that server.

Create and deploy a Java EE application

So let's now move to the step of creating and deploying a Java EE application. I didn't have any specific application in mind, but wanted to deploy an application which would involve accessing a database. Instead of creating a brand new application, I decided to use one of the quick start applications that come with JBoss AS7. The quick start applications for JBoss AS7 are available for download here. Once you have downloaded the quick start archive, unzip it to a location of your choice. Building the quick start examples requires the Maven build tool to be installed on your system. The details about the quick start applications and how to build them can be found here; those interested in trying them out might want to look at that guide. I chose the "kitchensink" application from those quick starts. The kitchensink application uses the Java Persistence API (JPA) for persistence and by default uses the java:jboss/datasources/ExampleDS datasource which is shipped by default with JBoss AS7. The ExampleDS uses H2 as its database. This is how the persistence.xml looks:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0"
   xmlns="http://java.sun.com/xml/ns/persistence"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
   <persistence-unit name="primary">
      <!-- If you are running in a production environment, add a managed data source,
           the example data source is just for proofs of concept! -->
      <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
      <properties>
         <!-- Properties for Hibernate -->
         <property name="hibernate.hbm2ddl.auto" value="create-drop" />
         <property name="hibernate.show_sql" value="false" />
      </properties>
   </persistence-unit>
</persistence>

That's enough for me to show how to deploy the application and also show the DB support available in OpenShift Express. After building the application, the deployable war is named jboss-as-kitchensink.war and is available on my local file system. My next step is to deploy it to the JBoss AS7 server which we have set up on the OpenShift Express cloud. Let's see how that's done.

Deploy the application to OpenShift Express

Remember, while creating the "application" using the rhc-create-app command, we used the -r option to point to a folder on our local file system for a local copy of the application repository.
That's the place which will now be used for deploying our applications. In my case, I used /home/jpai/OpenShift/myapps/demo as the repo location. This is how that folder looks:

demo
|
|--- deployments
|
|--- pom.xml
|
|--- README
|
|--- src

There's more than one way to deploy your application to OpenShift Express. One way is to write your code and commit the source code within the src folder of your local repository and then push your changes to the remote git repository. This will then trigger a Maven build for your project on the remote repository. More details about this approach are available in this blog. In our case, we'll focus on how to deploy an already built Java EE application to your OpenShift Express cloud server. In the previous step, we built jboss-as-kitchensink.war. Now, copy that war file to the "deployments" subfolder of your local git repository. In this case, it's /home/jpai/OpenShift/myapps/demo/deployments:

cp /home/jpai/jboss-as-quickstarts-7.0.0.Final/kitchensink/target/jboss-as-kitchensink.war /home/jpai/OpenShift/myapps/demo/deployments

Once you have copied it here, your next step is to "commit" this change using the git commit command:

jpai@jpai-laptop:demo$ git add deployments/jboss-as-kitchensink.war
jpai@jpai-laptop:demo$ git commit -m "Deploy kitchensink application" deployments/jboss-as-kitchensink.war
[master 1637c21] Deploy kitchensink application
 1 files changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 deployments/jboss-as-kitchensink.war

So at this point your kitchensink application has been committed to your local git repo. Next we should "push" this commit to the remote git repo:

jpai@jpai-laptop:openshift$ git push origin master
Counting objects: 6, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 393.71 KiB, done.
Total 4 (delta 1), reused 0 (delta 0)
remote: Stopping application...
remote: done
remote: Found .openshift/config/standalone.xml... copying to ...
....
....
remote: Starting application...done
To ssh://6a7ff43a6c2246999de28219a5aaa4ae@jaikiran-jbossas.rhcloud.com/~/git/jaikiran.git/
   6e57976..1637c21  master -> master

(trimmed some logs from the above output). So with this "push" we have now deployed our application to the remote OpenShift Express JBoss AS7 server. The jboss-as-kitchensink.war will be deployed at the "jboss-as-kitchensink" web application context, so the URL to access the application will be http://jaikiran-jbossas.rhcloud.com/jboss-as-kitchensink. Go ahead and access that URL. The application does nothing fancy: it allows you to add a user name, email and phone number, which are then stored in the database. Like I mentioned earlier, the kitchensink application uses the ExampleDS datasource which is backed by the H2 database, so all the data will be stored remotely in the H2 database.

Using the MySQL database available in OpenShift Express

OpenShift Express sets up a MySQL datasource template for you when you create a jbossas-7.0 application type.
The details of the datasource can be found in <path-to-local-repo>/.openshift/config/standalone.xml:

<subsystem xmlns="urn:jboss:domain:datasources:1.0">
    <datasources>
        <datasource jndi-name="java:jboss/datasources/ExampleDS" enabled="true" use-java-context="true" pool-name="H2DS">
            <connection-url>jdbc:h2:${jboss.server.data.dir}/test;DB_CLOSE_DELAY=-1</connection-url>
            <driver>h2</driver>
            <pool></pool>
            <security>
                <user-name>sa</user-name>
                <password>sa</password>
            </security>
            <validation></validation>
            <timeout></timeout>
            <statement></statement>
        </datasource>
        <datasource jndi-name="java:jboss/datasources/MysqlDS" enabled="false" use-java-context="true" pool-name="MysqlDS">
            <connection-url>jdbc:mysql://127.1.1.1:3306/mysql</connection-url>
            <driver>mysql</driver>
            <security>
                <user-name>admin</user-name>
                <password>changeme</password>
            </security>
        </datasource>
        <drivers>
            <driver name="h2" module="com.h2database.h2">
                <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
            </driver>
            <driver name="mysql" module="com.mysql.jdbc">
                <xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
            </driver>
        </drivers>
    </datasources>
</subsystem>

You'll notice that apart from the ExampleDS that comes by default in AS7, OpenShift Express has set up a MySQL datasource which will be available at java:jboss/datasources/MysqlDS. The important piece to note here is that it is disabled (i.e. enabled="false") by default. Also notice that the password is "changeme". Basically, this datasource configuration for MysqlDS in the standalone.xml is there as a template. In order to enable that datasource, we first have to create a MySQL database for our application. That can be done with the following command:

jpai@jpai-laptop:openshift$ rhc-ctl-app -a <application-name> -l <email-id-we-used-to-register> -e add-mysql-5.1

rhc-ctl-app is passed the application name (the one that we used with rhc-create-app) and our account id. Additionally, we use the -e option to specify what we want to do; in this case, we issue an "add-mysql-5.1" command. Running that command will ask you for your account password and on successful completion will show output similar to:

RESULT:
Mysql 5.1 database added. Please make note of these credentials:
Root User: admin
Root Password: as43n34023n
Connection URL: mysql://127.1.1.1:3306/

Note down the user name, password and the connection URL. Now open <repo-home>/.openshift/config/standalone.xml in a text editor and update the MysqlDS configuration to use the connection URL, the user name and the new password. Also set the enabled flag to "true" so that the datasource is enabled. Ultimately the datasource configuration will look like:

<datasource jndi-name="java:jboss/datasources/MysqlDS" enabled="true" use-java-context="true" pool-name="MysqlDS">
    <connection-url>jdbc:mysql://127.1.1.1:3306/mysql</connection-url>
    <driver>mysql</driver>
    <security>
        <user-name>admin</user-name>
        <password>as43n34023n</password>
    </security>
</datasource>

Pay attention to the connection-url: it has to be of the format jdbc:mysql://<ip:port>/dbname. Typically, you don't have to touch that connection-url at all, since rhc-ctl-app add-mysql-5.1 and the datasource template are in sync with the IP/port. The important pieces to change are the password and the enabled flag.
Once this file is updated, save the changes and commit it to your local git repo:

jpai@jpai-laptop:demo$ git commit -m "Enable the MysqlDS and fix the password" ./
[master dd7b58a] Fix the datasource password
 1 files changed, 1 insertions(+), 1 deletions(-)

Push these changes to the remote repo:

jpai@jpai-laptop:openshift$ git push origin master
Counting objects: 9, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (5/5), 494 bytes, done.
Total 5 (delta 2), reused 0 (delta 0)
remote: Stopping application...
remote: done
....
.....
remote: Starting application...done
To ssh://6a7ff43a6c2246999de28219a5aaa4ae@jaikiran-jbossas.rhcloud.com/~/git/jaikiran.git/
   2d38fa8..dd7b58a  master -> master

So we have now added the MySQL DB and enabled the MysqlDS datasource, which is available at the java:jboss/datasources/MysqlDS jndi name on the server. If the kitchensink application is to use MySQL as its database instead of H2, all it has to do is use java:jboss/datasources/MysqlDS. Let's now edit the persistence.xml file that we saw earlier and use the MysqlDS instead:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0"
   xmlns="http://java.sun.com/xml/ns/persistence"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
   <persistence-unit name="primary">
      <!-- Changed to use MysqlDS -->
      <jta-data-source>java:jboss/datasources/MysqlDS</jta-data-source>
      <properties>
         <!-- Properties for Hibernate -->
         <property name="hibernate.hbm2ddl.auto" value="create-drop" />
         <property name="hibernate.show_sql" value="false" />
      </properties>
   </persistence-unit>
</persistence>

Additionally, just to "show" that this new application has been updated to use the MySQL database, I also edited the index.xhtml page of the kitchensink application to add a message about the MySQL database being used:

<h3>
   <span style="color: red;">
      This application uses MySQL database as its persistence store
   </span>
</h3>

Next, I rebuilt the kitchensink application locally using Maven to pick up these changes and generate the new jboss-as-kitchensink.war. Once built, let's again copy it to our local git repo, then commit the change and push it to the remote git repo:

jpai@jpai-laptop:kitchensink$ cp target/jboss-as-kitchensink.war /home/jpai/OpenShift/myapps/demo/deployments
jpai@jpai-laptop:demo$ git commit -m "Use MySQL database for kitchensink application" ./
[master ded2445] Use MySQL database for kitchensink application
 1 files changed, 0 insertions(+), 0 deletions(-)
jpai@jpai-laptop:openshift$ git push origin master
Counting objects: 7, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 1.35 KiB, done.
Total 4 (delta 2), reused 0 (delta 0)
remote: Stopping application...
remote: done
remote: Found .openshift/config/standalone.xml... copying to...
...
...
remote: Starting application...done
To ssh://6a7ff43a6c2246999de28219a5aaa4ae@jaikiran-jbossas.rhcloud.com/~/git/jaikiran.git/
   1637c21..ded2445  master -> master
jpai@jpai-laptop:demo$

(trimmed some logs from the output) So at this point we have changed our kitchensink application to use the MySQL database and have deployed it to our OpenShift Express AS7 server. So let's access the application URL again: http://jaikiran-jbossas.rhcloud.com/jboss-as-kitchensink.
As you see, that page now prominently displays our message about the MySQL DB being used. Go ahead and try out that app by adding some dummy user information. That's it! We have successfully deployed our application to the OpenShift Express server and the application is available for use.

Summary

It's been a pleasant experience so far with OpenShift. I plan to try out a few more things with OpenShift in the coming days and blog about any interesting details.

Useful resources

While working on deploying this application, I had to use some documents and help from the OpenShift community to understand how this is all set up. Here's a list of useful resources related to OpenShift in general:

OpenShift Express User guide
OpenShift forums
OpenShift IRC #openshift on irc.freenode.net. The folks here are very helpful!
Scott Stark's blog. Scott's blogs contain a lot of useful information about AS7 on OpenShift and OpenShift in general. Scott's blog is definitely a must read!

Where to look for help

OpenShift questions in general are answered in the OpenShift forums. For questions around AS7 on OpenShift, the best place to ask is the JBoss Cloud Group.

Reference: OpenShift Express: Deploy a Java EE application (with AS7 support) from our JCG partner Jaikiran Pai at the Jaikiran My Wiki blog.

Profile your applications with Java VisualVM

When you need to discover which parts of an application consume the most CPU or memory, you need a profiler. One profiler, packaged by default with the Sun JDK, is Java VisualVM. This profiler is really simple to use and really powerful. In this post, we'll see how to install it and how to use it to profile an application.

Normally there is nothing to install, because it ships with the JDK. But on several Unix systems, like Ubuntu, this is not the case. If you need to install it, just use apt-get (or aptitude):

sudo apt-get install visualvm

To launch it, run jvisualvm (jvisualvm.exe in the bin directory of the JDK on Windows). That opens the main window, where there is not much of interest to see yet. To profile an application, you just have to launch the application and VisualVM will detect it as running. After that, double-click it to view information about your running application. You have four tabs available for your application (Overview, Monitor, Threads, Profiler); we'll go through all four.

First of all, the default tab, the Overview. This tab contains the main information about the launched application: the main class, the command-line arguments, the JVM arguments, which JVM is running your program and where that JVM is located. You can also see all the system properties set in the program.

A more interesting tab is the "Monitor" tab. It follows the CPU and memory usage of your application across four graphs. The first one, going left to right and top to bottom, displays the CPU usage and the garbage collector's CPU usage. The second graph displays the usage of the heap and the PermGen space. The next graph displays the total number of classes loaded in the application, and the last one displays the number of threads currently running. With these graphs, you can see whether your application takes too much CPU or uses too much memory.

The third tab provides some details about threads. In this view, you can see how the different threads of the application change state and how they evolve over time. You can also see how much time each thread spends in each state, and you can get details about the threads you are interested in.

And now, the tab I find most interesting: the Profiler. When you first open this tab, it contains no information at all; you must start one kind of profiling before any results appear. We'll start with CPU profiling. Just click the CPU button and the instrumentation will start. During the instrumentation the application is blocked; afterwards you can use the application again and the profiling results are displayed in the table. Of course the profiling adds overhead to your application. Normally it's not noticeable, but for certain applications you can lose a lot of responsiveness.

Here are the results I obtained with my simple application. In my example, we can see that the waitForTimeout method takes 81.6% of the CPU time. We can also see that the notifyDecision and getSensor methods are the next two most CPU-consuming methods; perhaps it would be interesting to optimize them. You can also look at the number of invocations of each method; perhaps you'll find one that is invoked too many times.

The next kind of profiling we can do is memory profiling. Here again, you have to start the profiling, the instrumentation will kick in, and during this the application will be frozen.
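If you want something concrete to try the CPU profiler on, here is a small, self-contained toy program. It is not the application from this article; the class and method names are invented purely for illustration:

public class HotSpotDemo {

    // Deliberately expensive method: this is what the CPU profiler should flag.
    private static double burnCpu(int iterations) {
        double acc = 0;
        for (int i = 1; i <= iterations; i++) {
            acc += Math.sqrt(i) * Math.sin(i);
        }
        return acc;
    }

    // Cheap method, for contrast in the profiler's results table.
    private static void idle() throws InterruptedException {
        Thread.sleep(100);
    }

    public static void main(String[] args) throws InterruptedException {
        // Keep running so there is time to attach VisualVM and start profiling.
        while (true) {
            burnCpu(5000000);
            idle();
        }
    }
}

Run it with java HotSpotDemo, let VisualVM pick it up in the Applications pane, start CPU profiling, and burnCpu should dominate the results table much like waitForTimeout does in my example below.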
Back to my own application, here are the memory profiling results. We can see that this application stores some big double[] and float[] arrays, and that the EllipseIterator and BasicStroke classes also take up a lot of memory. For both memory and CPU profiling, you can save the results to a file to examine later. For example, you can let an application run all night, save the results in the morning and examine them, or make three profiling runs and compare them.

To conclude, this profiler is really simple yet really powerful to use. It has the main features we want from a profiler and the results are really good. This kind of tool can really help you make an application use less CPU and memory. Of course it doesn't do everything; it just shows which parts of the application must be improved. The improvement itself is the developer's task, and it's not the easiest one. But having such a tool is a good start.

Reference: Profile your applications with Java VisualVM from our JCG partner Baptiste Wicht at the @Blog("Baptiste Wicht") blog.

Related Articles: Monitoring OpenJDK from the CLI, Performance Anxiety – on Performance Unpredictability, Its Measurement and Benchmarking, JVM options: -client vs -server, Low GC in Java: Use primitives instead of wrappers...

Java 8 Status Updates

The two big new language features of the upcoming Java SE 8 release are Lambda Expressions and Modularity. For both, status updates have been released these days. I'll share the links with you, so you can read through them over the holidays. The Java SE 8 release is planned by Oracle for mid-2013.

Project Lambda

Project Lambda, together with JSR 335, wants to provide means for modelling code as data in Java – in non-exact, colloquial words, one could say it aims for functions as first-class objects in Java. To do so, Project Lambda wants to provide the following four extensions to the Java language:

Lambda Expressions or Closures, which allow the programmer to specify a piece of executable code in an idiomatic way. They can be stored in a variable, passed to a method as an argument or used as the return value of a method.
Expanded Target Typing to bind Lambda Expressions to objects of a specific type (type inference). These types can be so-called Function Interfaces – Java interfaces with exactly one method.
Method and Constructor References to allow the programmer to bind existing methods on objects to a Function Interface.
Default or Virtual Extension Methods to add more methods to existing interfaces without breaking existing implementations (especially in the collection library).

To give you an idea, here is a piece of code using anonymous inner classes for some collection logic:

List<Student> students = // ...
students.filter(new FilterFunction<Student>(){
    @Override
    public boolean filter(Student s){
        return s.getEntryYear() == 2011;
    }
})
.map(new MapFunction<Student,Integer>(){
    @Override
    public Integer map(Student s){
        return s.getGrade();
    }
})
.reduce(new ReduceFunction<Integer>(){
    @Override
    public Integer reduce(Integer value1, Integer value2){
        return Math.max(value1, value2);
    }
});

In contrast, the following code uses the features on their way with Project Lambda:

List<Student> students = // ...
students.parallel()
    .filter(s -> s.getEntryYear() == 2011)
    .map(s -> s.getGrade())
    .reduce(Math::max);

The information about the current state from Specification Lead and OpenJDK Project Lead Brian Goetz can be found at State of the Lambda.

Project Jigsaw – Modularity for the Java Platform

In Project Jigsaw, the OpenJDK community led by Oracle tries to introduce modularity into the Java language. The approach will be different from, e.g., OSGi, because they want to establish it at the language level – with static compile-time checking. The Oracle people always say they strive for compatibility of Jigsaw with OSGi. Mark Reinhold, Oracle's Chief Platform Architect and OpenJDK Project Lead, describes three principles of the modularity approach:

Modularity is a language construct – The best way to support modular programming in a standard way in the Java platform is to extend the language itself to support modules. Developers already think about standard kinds of program components such as classes and interfaces in terms of the language; modules should be just another kind of program component.
Module boundaries should be strongly enforced – A class that is private to a module should be private in exactly the same way that a private field is private to a class. In other words, module boundaries should determine not just the visibility of classes and interfaces but also their accessibility. Without this guarantee it is impossible to construct modular systems capable of running untrusted code securely.
Static, single-version module resolution is usually sufficient – Most applications do not need to add or remove modules dynamically at run time, nor do they need to use multiple versions of the same module simultaneously. The module system should be optimized for common scenarios but also support narrowly-scoped forms of dynamic multi-version resolution motivated by actual use cases such as application servers, IDEs, and test harnesses.

For the programmer using Jigsaw, it will be especially noticeable because the language will now have three phases (instead of two):

Compile Time: The classes of a module are compiled. The compiled classes, together with the resources (configuration files, metadata files etc.), are packed into an archive in the JMOD ("Java module") format.
Install Time: On any computer with the JRE installed, there will be a module library. Here the user can install Java modules.
Run Time: A module defining a main class (an invokable module) can be executed. The JVM will load this module and any module it requires from the module library and then execute the code.

A small illustrative sketch of what such a module declaration might look like follows at the end of this post. Information about the current state of Project Jigsaw from Mark Reinhold can be found at Project Jigsaw: The Big Picture — DRAFT 1.

Reference: Java 8 Status Updates from our JCG partner Johannes Thönes at the Johannes Thönes blog.

Related Articles: Java 7: Project Coin in code examples, Java 8 virtual extension methods, Java Lambda Syntax Alternatives, Moving Java Forward? A definition. A year in review., Java SE 7, 8, 9 – Moving Java Forward, Java 7 Feature Overview...
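To make the "modularity is a language construct" principle a bit more tangible, here is the promised sketch of what a module declaration could look like. The exact keywords, versioning syntax and file layout were still in flux at the time of writing, so treat the module names and syntax below as illustrative only:

// module-info.java (illustrative only; the draft syntax evolved over time)
module com.example.inventory {
    // Resolved statically at compile time and install time, not discovered at run time.
    requires com.example.persistence;

    // Only packages that are explicitly exported are accessible to other modules;
    // everything else stays module-private ("strongly enforced boundaries").
    exports com.example.inventory.api;
}

Compiled and packaged, such a module would end up as a JMOD archive in the module library, from which the JVM resolves it, together with the modules it requires, at run time.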

Closed loops: the secret to collecting configuration management data

In my last post, How NOT to collect configuration management data, I gave a quick rundown of some losing CM data approaches that I and others have attempted in the past. Most of these approaches were variants of asking people for information, putting their answers in documents somewhere and never looking at the documents again. This time around I'm going to describe a key breakthrough that our team finally made – one that made it a lot easier to collect and update the data in question. That breakthrough was the concept of a closed loop and how it relates to configuration management. [As it happens, the concept is well-known in configuration management circles, but at the time it wasn't well-known to us. So we discovered something that other people already knew. It's in that limited sense that I say we made a breakthrough.]

We're going to have to build up to the closed loop concept. Let's start by looking at who has the best CM data in the org.

Who has the best CM data?

Different orgs are different, so it's tough to make blanket statements about who has the best CM data. But what I can do is give the answer for my workplace, and hopefully the principles will make sense even if the reality is different where you work. I'll bet for a lot of orgs it's quite similar. To avoid keeping you in suspense, the answer is…

Winner: the release team. Where I work, the release team has the best CM data, where "best" means something like comprehensive, accurate and actively managed. The release team knows which apps go on which servers, which service accounts to launch Tomcat or JBoss under, which alerts to suppress during a deployment, and so on. They know these things across the entire range of apps they support. It's all documented (in YAML files, databases, scripts, etc.) and it's all maintained in real time. Let's look at some other teams though.

App teams have nonsystematic data. The app teams typically know the URLs for their apps, which servers their apps are on (or at least the VIPs), and the interdependencies between their apps and other adjacent systems (web services, databases). But the knowledge is less systematic. It's more like browser bookmarks, tribal knowledge and not-quite-up-to-date wiki pages. And any given developer knows his own app and maybe the last app or two he worked on, but not all apps.

Ops teams have to depend on busy developers for info. The ops teams have better or worse information depending on how close to the app teams they are. The team in the NOC is almost entirely at the mercy of the app teams to write and update knowledge base articles. As you might imagine, it can be a challenge to ensure that developers up against a deadline are writing solid KB articles and maintaining older ones. For the NOC it's very important to know who the app SMEs are, given such challenges. Even this is not always readily clear as org changes occur, new apps appear, old apps get new names and so on.

App support teams are more expertise-driven than data-driven. The app support teams (they handle escalations out of the NOC) are generally more familiar with the apps themselves and so build up stronger knowledge about them, but this knowledge tends to be strongest with "problem child" apps. Also, different people tend to develop expertise with specific apps.

Why does the release team have the best CM data?

The release team has the best CM data because properly maintained CM data is fundamental to their job in a way that it's not for other teams. First, a quick aside.
If your company isn't regulated by SOX, you may be wondering what a release team is and why we have one. Among many other things, SOX requires a separation of duties between the people who write software and the people who deploy and support it in the production environment. The release team's primary responsibility is to release software into production. We actually have a couple of release teams, and each of them services many apps; it would not be feasible from a cost and utilization perspective for each app team to have its own dedicated release engineer. The release teams release software at low-volume times, generally during the wee hours.

Back to the idea that the release team needs proper CM data more than the other teams do. Why am I saying that? Here's why. The software development team is highly motivated to release software at a regular cadence. A fairly limited number of release engineers must service hundreds of applications (generally not all at once, though), so "tribal knowledge" isn't a viable strategy when it comes to knowing what to deploy where. It must be thoroughly and accurately documented. Releases happen every week, and late at night, so it's not reasonable for a release engineer to call up his buddy on the app team and ask for the list of app servers. The release team needs this information at their fingertips. If they don't have it, the software organization fails to realize the value of its development investment.

Indeed, "documented" is the wrong word here, because deployment automation drives the deployments. The CM data must be properly "operationalized", meaning that it must be consumable by automation. No Word docs, no Excel spreadsheets, no wiki pages. More like YAML files, XML files, web service calls against a CMDB, etc. Importantly, when the data is wrong, the deployment fails. People really care about deployments not failing, so if there are data problems, people will definitely discover and fix them. Let's look at the app and ops teams again.

App teams can make do without great CM data. The dependency of app developers on their CM data is softer. Yes, a developer needs to know which web services his app calls, but someone just explains that when he joins the project, and that's really all there is to it. If he has a question about a transitive dependency, he might ask a teammate. If he needs to get to the app in the test environment, he probably has the URL bookmarked and the credentials recorded somewhere; if not, he can easily ask somebody. 99% of the time, the developer can do what he needs to do without reference to specific CM data points. The developer may or may not automate against the CM data.

Ops/support teams need good CM data, but expertise is cheaper in the short to medium term. Except in cases involving very aggressive SLAs, even ops often has a softer dependency on CM data than the release team does. Since (hopefully) app outages occur much less frequently than app deployments, the return on knowledge base investments is more sporadic than the return on deployment automation. If the app in question isn't particularly important, investments in KB articles may be very limited indeed. In most cases, investing in serious support firepower (when something breaks, bring significant subject matter expertise to bear on the problem) yields a better short- to medium-term return. (Of course, in the longer term this strategy fails, because eventually there will be the very costly outage that takes the business out for several days.
That's a subject for a different day.)

Now we're in a good place to understand closed loops and why they're so important for configuration management data.

Closed loops and why they matter

I think of closed loops like this. There's a "steady state" that we want to establish and maintain with respect to our CM data: we want it to be comprehensive, accurate and relevant. When the state of our CM data diverges from that desired steady state, we want feedback loops to alert us to the situation so we can address it. That's a closed loop.

Example 1: deployment automation. The best example is the one that we already described: deployment data. Deployment data drives the deployment process, and when the data is wrong, the deployment process fails. Because the deployment process is extremely important to the organization, some level of urgency attaches to fixing wrong data. And it's not just wrong data: if we need to deploy an app and there's missing data in the CMDB, then sorry, there's no deployment! Rest assured that if the deployment matters, the missing data is only a temporary issue. (A small code sketch of this fail-fast idea appears at the end of this post.)

Example 2: fine-grained access controls. Here's another example: team membership data. We've already noted that for operational reasons it's very important to know who is on which development team. This isn't something that's going to be in the HR system, and people have better things to do than update team membership data. But what happens when that team membership data drives ACLs for something you care about being able to do, like deploying your app to your dev environment? Now you're going to see better team membership data.

The basic concept is to find something that people really, really care about, and then make it strongly dependent on having good CM data. Ideally the CM data drives an automated process that people care about, but that's not strictly necessary. In my org, for instance, there's a fairly robust but manual goal planning and goal tracking process. Every quarter the whole department goes through goal planning (my goals roll up into my boss' goals and so on), and then we track progress against those goals every couple of weeks. The goal planning and tracking app requires correct information about who's on which team, and so this establishes yet another closed loop on the team membership data. It also illustrates the point that you can hit the same type of data with multiple loops.

Design your CM strategy holistically

There are several areas in technology where it pays to take a holistic view of design: security, user experience and system testing come immediately to mind. In each case you consider a given technical system in its wider organizational context. (Super-duper-strong password requirements don't help if people have to write them down on Post-its.) Configuration management is another place where it makes a lot of sense to take a holistic approach to design. For any given type of data (there's no one-size-fits-all answer here), try to figure out something important that depends on it, and then figure out how to tie that something to your data so that wheels just start falling off if the data is wrong, incomplete and so on. Again, data-driven automated processes are superior here, but any important process (automated or not) will help.

Fewer meetings?

Almost forgot. In the last post, I mentioned that I'd equip you to get out of some pointless meetings.
The meetings in question are the ones where somebody wants to get together with you to collect CM data from you so they can post it to their SharePoint site. Decline those; they're just a waste of time. Insist that people be able to articulate the closed loops they will be creating to make sure that someone discovers gaps and errors in the data. I've been in plenty of such meetings, and in some cases they're set up as half- or full-day meetings. I don't do those anymore.

I'm working on an open source CMDB, called Skybase, that can help you establish closed loop configuration management. See the Skybase GitHub site.

Reference: Closed loops: the secret to collecting configuration management data from our JCG partner Willie Wheeler at the Skydingo blog.

Related Articles: Devops: How NOT to collect configuration management data, Devops has made Release and Deployment Cool, How to solve production problems, GlassFish Response GZIP Compression in Production...
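To close, here is the promised sketch of the fail-fast idea behind Example 1 above. It is not Skybase code, and the property names are invented; the point is only that when deployment automation consumes the CM data directly, missing or wrong data stops the deployment immediately, which is exactly the feedback that keeps the data accurate:

import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

public class DeploymentDescriptorCheck {

    // Keys the (hypothetical) deployment automation needs before it will do anything.
    private static final String[] REQUIRED_KEYS = {
            "app.name", "app.version", "target.servers", "service.account"
    };

    public static void main(String[] args) throws IOException {
        Properties cmData = new Properties();
        try (FileReader reader = new FileReader(args[0])) {
            cmData.load(reader);   // e.g. a deployment.properties file exported from the CMDB
        }

        // Closed loop: bad or missing CM data fails the deployment loudly,
        // so somebody has to fix the data before the release can proceed.
        for (String key : REQUIRED_KEYS) {
            String value = cmData.getProperty(key);
            if (value == null || value.trim().isEmpty()) {
                throw new IllegalStateException("CM data incomplete: missing '" + key + "'");
            }
        }
        System.out.println("CM data looks complete; proceeding with deployment of "
                + cmData.getProperty("app.name") + " " + cmData.getProperty("app.version"));
    }
}

Whether the check lives in a standalone tool like this or inside the deployment pipeline itself matters less than the fact that the data, not a wiki page, is what the automation reads.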