
Using EasyMock or Mockito

I have been using EasyMock for most of my career, but recently I worked with a few people who were strongly inclined to use Mockito. Not intending to use two frameworks for the same purpose in the same project, I adopted Mockito. So for the last couple of months I have been using Mockito, and here is my comparative analysis of the two. The people with whom I have worked cite test readability as the reason for using Mockito, but I have a different opinion on that. Suppose we have the following code that we intend to test:

```java
public class MyApp {
    MyService service;
    OtherService otherService;

    void operationOne() {
        service.operationOne();
    }

    void operationTwo(String args) {
        String operationTwo = otherService.operationTwo(args);
        otherService.operationThree(operationTwo);
    }

    void operationThree() {
        service.operationOne();
        otherService.operationThree("success");
    }
}

class MyService {
    void operationOne() {}
}

class OtherService {
    public String operationTwo(String args) {
        return args;
    }

    public void operationThree(String operationTwo) {}
}
```

Now let me write a simple test case for this class, first using EasyMock and then using Mockito.

```java
public class MyAppEasyMockTest {
    MyApp app;
    MyService service;
    OtherService otherService;

    @Before
    public void initialize() {
        service = EasyMock.createMock(MyService.class);
        otherService = EasyMock.createMock(OtherService.class);
        app = new MyApp();
        app.service = service;
        app.otherService = otherService;
    }

    @Test
    public void verifySimpleCall() {
        service.operationOne();
        EasyMock.replay(service);
        app.operationOne();
        EasyMock.verify(service);
    }
}
```

```java
public class MyAppMockitoTest {
    MyApp app;
    MyService service;
    OtherService otherService;

    @Before
    public void initialize() {
        service = Mockito.mock(MyService.class);
        otherService = Mockito.mock(OtherService.class);
        app = new MyApp();
        app.service = service;
        app.otherService = otherService;
    }

    @Test
    public void verifySimpleCall() {
        app.operationOne();
        Mockito.verify(service).operationOne();
    }
}
```

This is a really simple test, and I must say the Mockito one is more readable. But according to classic testing methodology, the Mockito test is not complete. We have verified the call that we are looking for, but if tomorrow I change the source code by adding one more call to service, the test would not break:

```java
void operationOne() {
    service.operationOne();
    service.someOtherOp();
}
```

Now this makes me feel that the tests are not good enough. But thankfully Mockito gives us verifyNoMoreInteractions, which can be used to complete the test.
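As a quick sketch (my illustration, reusing the fixture from MyAppMockitoTest above), the completed version of the simple test could look like this:

```java
@Test
public void verifySimpleCallCompletely() {
    app.operationOne();
    Mockito.verify(service).operationOne();
    // With this line the test now fails if MyApp.operationOne() ever makes an
    // interaction with 'service' that was not explicitly verified above,
    // e.g. the service.someOtherOp() call added in the snippet earlier.
    Mockito.verifyNoMoreInteractions(service);
}
```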
Now let me write a few more tests for the MyApp class.

```java
public class MyAppEasyMockTest {

    @Test
    public void verifyMultipleCalls() {
        String args = "one";
        EasyMock.expect(otherService.operationTwo(args)).andReturn(args);
        otherService.operationThree(args);
        EasyMock.replay(otherService);
        app.operationTwo(args);
        EasyMock.verify(otherService);
    }

    @Test(expected = RuntimeException.class)
    public void verifyException() {
        service.operationOne();
        EasyMock.expectLastCall().andThrow(new RuntimeException());
        EasyMock.replay(service);
        app.operationOne();
    }

    @Test
    public void captureArguments() {
        Capture<String> captured = new Capture<String>();
        service.operationOne();
        otherService.operationThree(EasyMock.capture(captured));
        EasyMock.replay(service, otherService);
        app.operationThree();
        EasyMock.verify(service, otherService);
        assertTrue(captured.getValue().contains("success"));
    }
}
```

```java
public class MyAppMockitoTest {

    @Test
    public void verifyMultipleCalls() {
        String args = "one";
        Mockito.when(otherService.operationTwo(args)).thenReturn(args);
        app.operationTwo(args);
        Mockito.verify(otherService).operationTwo(args);
        Mockito.verify(otherService).operationThree(args);
        Mockito.verifyNoMoreInteractions(otherService);
        Mockito.verifyZeroInteractions(service);
    }

    @Test(expected = RuntimeException.class)
    public void verifyException() {
        Mockito.doThrow(new RuntimeException()).when(service).operationOne();
        app.operationOne();
    }

    @Test
    public void captureArguments() {
        app.operationThree();
        ArgumentCaptor<String> capturedArgs = ArgumentCaptor.forClass(String.class);
        Mockito.verify(service).operationOne();
        Mockito.verify(otherService).operationThree(capturedArgs.capture());
        assertTrue(capturedArgs.getValue().contains("success"));
        Mockito.verifyNoMoreInteractions(service, otherService);
    }
}
```

These are some practical testing scenarios where we would like to assert arguments, exceptions, etc. If I look at and compare the tests written using EasyMock with the ones using Mockito, I tend to feel that both are equal in readability; neither does a better job. The large number of expect-and-return calls in EasyMock makes those tests less readable, while the verify statements of Mockito often compromise test readability. According to the Mockito documentation, verifyZeroInteractions and verifyNoMoreInteractions should not be used in every test that you write, but if I leave them out of my tests then my tests are not good enough. Moreover, in tests everything should be under the control of the developer, i.e. how the interactions are happening and what interactions are happening. In EasyMock this aspect is more visible, as the developer must put down all of these interactions in his code, but in Mockito the framework takes care of all interactions and the developer is only concerned with their verification (if any). This can lead to testing scenarios where the developer is not in control of all interactions. There are a few nice things Mockito has, like the JUnit runner that can be used to create mocks of all the required dependencies. It is a nice way of removing some of the infrastructure code, and EasyMock should have one too:

```java
@RunWith(MockitoJUnitRunner.class)
public class MyAppMockitoTest {
    MyApp app;

    @Mock
    MyService service;

    @Mock
    OtherService otherService;

    @Before
    public void initialize() {
        app = new MyApp();
        app.service = service;
        app.otherService = otherService;
    }
}
```

Conclusion: Having used both frameworks, I feel that except for simple test cases, both EasyMock and Mockito lead to test cases that are equal in readability.
But EasyMock is better for unit testing, as it forces the developer to take control of things. Mockito, due to its assumptions and conventions, hides this control under the carpet and is thus not as good a choice. But Mockito offers certain things that are quite useful (e.g. the JUnit runner, call chaining), and EasyMock should get one in its next release. Reference: using EasyMock or Mockito from our JCG partner Rahul Sharma at the The road so far… blog. ...

ADF: Dynamic View Object

Today I want to write about the dynamic view object, which allows me to change its data source (SQL query) and attributes at run time. I will use the oracle.jbo.ApplicationModule::createViewObjectFromQueryStmt method to do this, and I will show how step by step.

Create View Object and Application Module

1. Right-click on the Model project and choose New.
2. Choose "ADF Business Component" from the left pane, then choose "View Object" from the list and click the "OK" button.
3. Enter "DynamicVO" in "Name", choose the "Sql Query" radio button, and click the "Next" button.
4. Write "select * from dual" in the Select field and click the "Next" button until you reach the "Step 8 of 9" window.
5. Check the "Add to Application Module" check box and click the "Finish" button.

Implement Changes in the Application Module

1. Open the application module "AppModule", then open the Java tab and check the "Generate Application Module Class AppModuleImpl" check box.
2. Open the AppModuleImpl.java class and add the method below for the dynamic view object:

```java
public void changeDynamicVoQuery(String sqlStatement) {
    ViewObject dynamicVO = this.findViewObject("DynamicVO1");
    dynamicVO.remove();
    dynamicVO = this.createViewObjectFromQueryStmt("DynamicVO1", sqlStatement);
    dynamicVO.executeQuery();
}
```

3. Open "AppModule", then open the Java tab and add the changeDynamicVoQuery method to the Client Interface.

Test the Business Component

1. Right-click on AppModule in the Application Navigator and choose Run from the drop-down list.
2. Right-click on AppModule in the left pane and choose Show from the drop-down list. Write "Select * from Emp" in the sqlStatement parameter and click the Execute button; the result will be Success.
3. Double-click on DynamicVO1 in the left pane. It will display the data of DynamicVO, showing the data for the "Select * from Emp" query entered above, not for the "Select * from dual" query that was used at design time of the view object.

To use dynamic view objects in ADF Faces, you should use an ADF Dynamic Table or ADF Dynamic Form. You can download the sample application from here. Reference: ADF : Dynamic View Object from our JCG partner Mahmoud A. ElSayed at the Dive in Oracle blog....
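As an aside, here is a minimal, hedged sketch of invoking the exposed method from a standalone client once it is on the client interface. The package and interface names (model.AppModule, model.common.AppModule, AppModuleLocal) are assumptions based on common JDeveloper defaults, not from the article:

```java
import oracle.jbo.ApplicationModule;
import oracle.jbo.client.Configuration;

public class DynamicVoClient {
    public static void main(String[] args) {
        // Names assumed: adjust to your application module definition and configuration.
        ApplicationModule am = Configuration.createRootApplicationModule(
                "model.AppModule", "AppModuleLocal");
        // Swap the view object's query at run time via the client interface method.
        ((model.common.AppModule) am).changeDynamicVoQuery("select * from Emp");
        Configuration.releaseRootApplicationModule(am, true);
    }
}
```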

Bug Fixing – to Estimate, or not to Estimate, that is the question

According to Steve McConnell in Code Complete (data from 1975-1992), most bugs don't take long to fix. About 85% of errors can be fixed in less than a few hours. Some more can be fixed in a few hours to a few days. But the rest take longer, sometimes much longer – as I talked about in an earlier post. Given all of these factors and this uncertainty, how do you estimate a bug fix? Or should you bother?

Block out some time for bug fixing

Some teams don't estimate bug fixes upfront. Instead they allocate a block of time, some kind of buffer, for bug fixing as a regular part of the team's work, especially if they are working in time boxes. Developers come back with an estimate only if it looks like the fix will require a substantial change – after they've dug into the code and found out that the fix isn't going to be easy, that it may require a redesign or changes to complex or critical code that needs careful review and testing.

Use a rule-of-thumb placeholder for each bug fix

Another approach is to use a rough rule of thumb, a standard placeholder for every bug fix: estimate, say, half a day of development work for each bug. According to this post on Stack Overflow, the half-day suggestion comes from Jeff Sutherland, one of the inventors of Scrum. This placeholder should work for most bugs. If it takes a developer more than half a day to come up with a fix, then they probably need help and people need to know anyway. Pick a placeholder and use it for a while. If it seems too small or too big, change it. Iterate. You will always have bugs to fix. You might get better at fixing them over time, or they might get harder to find and fix once you've got past the obvious ones. Or you could use the data from Capers Jones, mentioned earlier, on how long it takes to fix a bug by the type of bug. A day or half day works well on average, especially since most bugs are coding bugs (3 hours on average) or data bugs (6.5 hours). Even design bugs take only a little more than a day to resolve, on average.

Collect some data – and use it

Steve McConnell, in Software Estimation: Demystifying the Black Art, says that it's always better to use data than to guess. He suggests collecting time data for as little as a few weeks or maybe a couple of months on how long on average it takes to fix a bug, and using this as a guide for estimating bug fixes going forward. If you have enough defect data, you can be smarter about how to use it. If you are tracking bugs in a bug database like Jira, and if programmers are tracking how much time they spend on fixing each bug for billing or time-accounting purposes (which you can also do in Jira), then you can mine the bug database for similar bugs and see how long they took to fix – and maybe get some ideas on how to fix the bug that you are working on by reviewing what other people did on other bugs before you. You can group different bugs into buckets (by size – small, medium, large, x-large – or by type) and then come up with an average estimate, and maybe even a best case, worst case and most likely case for each type, as the toy sketch below illustrates.
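A minimal sketch of that bucketing idea (my illustration; the bucket names and hours are made-up sample data, not from the article):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BugFixAverages {
    public static void main(String[] args) {
        // Hypothetical mined time-tracking data: bucket -> hours spent on each past fix.
        Map<String, List<Double>> fixHoursByBucket = new HashMap<>();
        fixHoursByBucket.put("small", Arrays.asList(1.0, 2.5, 3.0));
        fixHoursByBucket.put("medium", Arrays.asList(4.0, 6.5, 8.0));
        fixHoursByBucket.put("large", Arrays.asList(16.0, 24.0, 40.0));

        // The average per bucket becomes the placeholder estimate for future bugs.
        for (Map.Entry<String, List<Double>> e : fixHoursByBucket.entrySet()) {
            double avg = e.getValue().stream()
                    .mapToDouble(Double::doubleValue).average().orElse(0);
            System.out.printf("%s bugs: %.1f hours on average%n", e.getKey(), avg);
        }
    }
}
```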
Use benchmarks

For a maintenance team (a sustaining engineering or break/fix team responsible for software repairs only), you could use industry productivity benchmarks to project how many bugs your team can handle. Capers Jones, in Estimating Software Costs, says that the average programmer (in the US, in 2009) can fix 8-10 bugs per month (of course, if you're an above-average programmer working in Canada in 2012, you'll have to set these numbers much higher). Inexperienced programmers can be expected to fix 6 a month, while experienced developers using good tools can fix up to 20 per month. If you're focusing on fixing security vulnerabilities reported by a pen tester or a scan, check out the remediation statistical data that Denim Group has started to collect to get an idea of how long it might take to fix a SQL injection bug or an XSS vulnerability.

So, do you estimate bug fixes, or not?

Because you can't estimate how long it will take to fix a bug until you've figured out what's wrong, and most of the work in fixing a bug involves figuring out what's wrong, it doesn't make sense to try to do an in-depth estimate of how long each bug will take to fix as it comes up. Using simple historical data, a benchmark, or even a rough-guess placeholder as a rule of thumb all seem to work just as well. Whatever you do, do it in the simplest and most efficient way possible; don't waste time trying to get it perfect – and realize that you won't always be able to depend on it. Remember the 10x rule – some outlier bugs can take up to 10x as long to find and fix as an average bug. And some bugs can't be found or fixed at all – or at least not with the information that you have today. When you're wrong (and sometimes you're going to be wrong), you can be really wrong, and even careful estimating isn't going to help. So stick with a simple, efficient approach, and be prepared when you hit a hard problem, because it's gonna happen. Reference: Bug Fixing – to Estimate, or not to Estimate, that is the question from our JCG partner Jim Bird at the Building Real Software blog....

Mahout and Scalding for poker collusion detection

When I was reading a very bright book on Mahout, Mahout In Action (which is a great hands-on intro to machine learning as well), one of the examples caught my attention. The authors of the book were using the well-known K-means clustering algorithm for finding similar users on stackoverflow.com, where the criterion of similarity was the set of authors of the questions/answers the users were up-/downvoting. In very simple words, the K-means algorithm iteratively finds clusters of points/vectors located close to each other in a multidimensional space. Applied to the problem of finding similar users on Stack Overflow, we assume that every axis in the multi-dimensional space is a user, where the distance from zero is the sum of points awarded to the questions/answers given by other users (those dimensions are also often called "features", where the distance is a "feature weight").

Obviously, the same approach can be applied to one of the most sophisticated problems in massively-multiplayer online poker – collusion detection. We make a simple assumption: if two or more players have played too many games with each other (taking into account that any of the players could simply be an active player who plays a lot of games with anyone), they might be in collusion. We break a massive set of players into small, tight clusters (preferably with 2-8 players in each) using the K-means clustering algorithm. In the basic implementation that we will go through below, every user is represented as a vector, where the axes are the other players that she has played with (and the weight of the feature is the number of games played together).

Stage 1. Building a dictionary

As the first step, we need to build a dictionary/enumeration of all the players involved in the subset of hand history that we analyze:

```scala
// extract user ID from hand history record
val userId = (playerHistory: PlayerHandHistory) =>
  new Text(playerHistory.getUserId.toString)

// Builds a basic dictionary (enumeration, in fact) of all the players that participated
// in the selected subset of hand history records
class Builder(args: Args) extends Job(args) {

  // input tap is an HTable with hand history entries:
  // hand history id -> hand history record, serialized with ProtoBuf
  val input = new HBaseSource("hand", args("hbasehost"), 'handId, Array("d"), Array('blob))
  // output tap - plain text file with player IDs
  val output = TextLine(args("output"))

  input
    .read
    .flatMap('blob -> 'player) {
      // every hand history record contains the list of players that participated in the hand
      blob: Array[Byte] =>
        // at the first stage, we simply extract the list of IDs, and add it to the flat list
        HandHistory.parseFrom(blob).getPlayerList.map(userId)
    }
    .unique('player) // remove duplicate user IDs
    .project('player) // leave only 'player column from the tuple
    .write(output)
}
```

```
1003
1004
1005
1006
1007
...
```

Stage 2. Adding indices to the dictionary

Secondly, we map user IDs to the position/index of each player in the vector.

```scala
class Indexer(args: Args) extends Job(args) {

  val output = WritableSequenceFile(args("output"), classOf[Text], classOf[IntWritable],
    'userId -> 'idx)

  TextLine(args("input")).read
    .map(('offset -> 'line) -> ('userId -> 'idx)) {
      // dictionary lines are read with indices from the TextLine source out of the box.
      // For some reason, in my case, indices were multiplied by 5, so I have had to divide them
      tuple: (Int, String) =>
        (new Text(tuple._2.toString) -> new IntWritable((tuple._1 / 5)))
    }
    .project(('userId -> 'idx)) // only the userId -> index tuple is passed to the output
    .write(output)
}
```

```
1003 0
1004 1
1005 2
1006 3
1007 4
...
```

Stage 3. Building vectors

We build the vectors that will be passed as input to the K-means clustering algorithm. As noted above, every position in the vector corresponds to another player the player has played with:

```scala
/**
 * K-means clustering algorithm requires the input to be represented as vectors.
 * In our case, the vector, itself, represents the player, where other users the player
 * has played with are vector axes/features (the weight of the feature is the number
 * of games played together)
 * User: remeniuk
 */
class VectorBuilder(args: Args) extends Job(args) {

  import Dictionary._

  // initializes dictionary pipe
  val dictionary = TextLine(args("dictionary"))
    .read
    .map(('offset -> 'line) -> ('userId -> 'dictionaryIdx)) {
      tuple: (Int, String) =>
        (tuple._2 -> tuple._1 / 5)
    }
    .project(('userId -> 'dictionaryIdx))

  val input = new HBaseSource("hand", args("hbasehost"), 'handId, Array("d"), Array('blob))
  val output = WritableSequenceFile(args("output"), classOf[Text], classOf[VectorWritable],
    'player1Id -> 'vector)

  input
    .read
    .flatMap('blob -> ('player1Id -> 'player2Id)) {
      // builds a flat list of pairs of users that played together
      blob: Array[Byte] =>
        val playerList =
          HandsHistoryCoreInternalDomain.HandHistory.parseFrom(blob).getPlayerList.map(userId)
        playerList.flatMap {
          playerId =>
            playerList.filterNot(_ == playerId).map(otherPlayerId =>
              (playerId -> otherPlayerId.toString))
        }
    }
    // joins the list of pairs of users that played together with the dictionary,
    // so that the second member of the tuple (ID of the second player) is replaced
    // with the index in the dictionary
    .joinWithSmaller('player2Id -> 'userId, dictionary)
    .groupBy('player1Id -> 'dictionaryIdx) {
      // groups pairs of players that played together, counting the number of hands
      group => group.size
    }
    .map(('player1Id, 'dictionaryIdx, 'size) -> ('playerId, 'partialVector)) {
      tuple: (String, Int, Int) =>
        // turns a tuple of two users into a vector with one feature
        val partialVector = new NamedVector(
          new SequentialAccessSparseVector(args("dictionarySize").toInt), tuple._1)
        partialVector.set(tuple._2, tuple._3)
        (new Text(tuple._1), new VectorWritable(partialVector))
    }
    .groupBy('player1Id) {
      // combines partial vectors into one vector that represents the number of hands
      // played with other players
      group =>
        group.reduce('partialVector -> 'vector) {
          (left: VectorWritable, right: VectorWritable) =>
            new VectorWritable(left.get.plus(right.get))
        }
    }
    .write(output)
}
```

```
1003 {3:5.0,5:4.0,6:4.0,9:4.0}
1004 {2:4.0,4:4.0,8:4.0,37:4.0}
1005 {1:4.0,4:5.0,8:4.0,37:4.0}
1006 {0:5.0,5:4.0,6:4.0,9:4.0}
1007 {1:4.0,2:5.0,8:4.0,37:4.0}
...
```

The entire workflow required to vectorize the input:

```scala
val conf = new Configuration
conf.set("io.serializations",
  "org.apache.hadoop.io.serializer.JavaSerialization," +
    "org.apache.hadoop.io.serializer.WritableSerialization")

// the path where the vectors will be stored
val vectorsPath = new Path("job/vectors")
// enumeration of all users involved in a selected subset of hand history records
val dictionaryPath = new Path("job/dictionary")
// text file with the dictionary size
val dictionarySizePath = new Path("job/dictionary-size")
// indexed dictionary (every user ID in the dictionary is mapped to an index, from 0)
val indexedDictionaryPath = new Path("job/indexed-dictionary")

println("Building dictionary...")
// extracts IDs of all the users participating in the selected subset of hand history records
Tool.main(Array(classOf[Dictionary.Builder].getName, "--hdfs",
  "--hbasehost", "localhost", "--output", dictionaryPath.toString))
// adds index to the dictionary
Tool.main(Array(classOf[Dictionary.Indexer].getName, "--hdfs",
  "--input", dictionaryPath.toString, "--output", indexedDictionaryPath.toString))
// calculates dictionary size, and stores it to the FS
Tool.main(Array(classOf[Dictionary.Size].getName, "--hdfs",
  "--input", dictionaryPath.toString, "--output", dictionarySizePath.toString))

// reads dictionary size
val fs = FileSystem.get(dictionaryPath.toUri, conf)
val dictionarySize = new BufferedReader(
  new InputStreamReader(
    fs.open(new Path(dictionarySizePath, "part-00000"))
  )).readLine().toInt

println("Vectorizing...")
// builds vectors (player -> other players in the game)
// IDs of other players (in the vectors) are replaced with indices, taken from the dictionary
Tool.main(Array(classOf[VectorBuilder].getName, "--hdfs",
  "--dictionary", dictionaryPath.toString, "--hbasehost", "localhost",
  "--output", vectorsPath.toString, "--dictionarySize", dictionarySize.toString))
```

Stage 4. Generating n random clusters

Randomly selected clusters/centroids are the entry point for the K-means algorithm:

```scala
// randomly selected clusters that will be passed as an input to K-means
val inputClustersPath = new Path("job/input-clusters")
val distanceMeasure = new EuclideanDistanceMeasure

println("Making random seeds...")
// build 30 initial random clusters/centroids
RandomSeedGenerator.buildRandom(conf, vectorsPath, inputClustersPath, 30, distanceMeasure)
```

Stage 5. Running the K-means algorithm

With every iteration, K-means finds better centroids and clusters. As a result, we get 30 clusters of players that played with each other most often:

```scala
// clusterization results
val outputClustersPath = new Path("job/output-clusters")
// textual dump of clusterization results
val dumpPath = "job/dump"

println("Running K-means...")
// runs the K-means algorithm with up to 20 iterations, to find clusters of colluding players
// (the assumption of collusion is made on the basis of the number of hands played together
// with other player[s])
KMeansDriver.run(conf, vectorsPath, inputClustersPath, outputClustersPath,
  new CosineDistanceMeasure(), 0.01, 20, true, 0, false)

println("Printing results...")

// dumps clusters to a text file
val clusterizationResult = finalClusterPath(conf, outputClustersPath, 20)
val clusteredPoints = new Path(outputClustersPath, "clusteredPoints")
val clusterDumper = new ClusterDumper(clusterizationResult, clusteredPoints)
clusterDumper.setNumTopFeatures(10)
clusterDumper.setOutputFile(dumpPath)
clusterDumper.setTermDictionary(new Path(indexedDictionaryPath, "part-00000").toString,
  "sequencefile")
clusterDumper.printClusters(null)
```

Results

Let's go to "job/dump" now – this file contains the textual dumps of all the clusters generated by K-means.
Here's a small fragment of the file:

```
VL-0{n=5 c=[1003:3.400, 1006:3.400, 1008:3.200, 1009:3.200, 1012:3.200] r=[1003:1.744, 1006:1.744, 1008:1.600, 1009:1.600, 1012:1.600]}
    Top Terms:
        1006 => 3.4
        1003 => 3.4
        1012 => 3.2
        1009 => 3.2
        1008 => 3.2
VL-15{n=1 c=[1016:4.000, 1019:3.000, 1020:3.000, 1021:3.000, 1022:3.000, 1023:3.000, 1024:3.000, 1025:3.000] r=[]}
    Top Terms:
        1016 => 4.0
        1025 => 3.0
        1024 => 3.0
        1023 => 3.0
        1022 => 3.0
        1021 => 3.0
        1020 => 3.0
        1019 => 3.0
```

As we can see, two clusters of players have been detected: one with 8 players that have played a lot of games with each other, and a second with 4 players. Reference: Poker collusion detection with Mahout and Scalding from our JCG partner Vasil Remeniuk at the Vasil Remeniuk blog....

Hibernate caches basics

Recently I have experimented with the Hibernate cache. In this post I would like to share my experience and point out some of the details of the Hibernate second-level cache. Along the way I will direct you to some articles that helped me implement the cache. Let's get started from the ground up.

Caching in Hibernate

Caching functionality is designed to reduce the amount of necessary database access. When objects are cached, they reside in memory. You have the flexibility to limit the usage of memory and store the items on disk storage instead. The implementation will depend on the underlying cache manager. There are various flavors of caching available, but it is better to cache non-transactional and read-only data. Hibernate provides three types of caching.

1. Session Cache

The session cache caches objects within the current session. It is enabled by default in Hibernate. Read more about the session cache. Objects in the session cache reside in the same memory location.

2. Second-Level Cache

The second-level cache is responsible for caching objects across sessions. When this is turned on, objects will first be searched for in the cache, and if they are not found, a database query will be fired. Read here on how to implement the second-level cache. The second-level cache will be used when objects are loaded using their primary key. This includes fetching of associations. In the case of the second-level cache the objects are constructed, and hence all of them will reside in different memory locations.

3. Query Cache

The query cache is used to cache the results of a query. Read here on how to implement the query cache. When the query cache is turned on, the results of the query are stored against the combination of query and parameters. Every time the query is fired, the cache manager checks for the combination of parameters and query. If the results are found in the cache, they are returned; otherwise a database transaction is initiated. As you can see, it is not a good idea to cache a query if it has a large number of parameters, or if a single parameter can take a large number of values: for each of these combinations the results are stored in memory, which can lead to extensive memory usage.

Finally, here is a list of good articles written on this topic:
1. Speed Up Your Hibernate Applications with Second-Level Caching
2. Hibernate: Truly Understanding the Second-Level and Query Caches
3. EhCache Integration with Spring and Hibernate. Step by Step Tutorial
4. Configuring Ehcache with hibernate

Reference: All about Hibernate Second Level Cache from our JCG partner Manu PK at the The Object Oriented Life blog....
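To make the above a little more concrete, here is a minimal, hedged sketch of turning on the second-level and query caches. It assumes Hibernate 3.x with Ehcache as the provider and a mapped Employee entity (property names vary between Hibernate versions; 3.5+ uses hibernate.cache.region.factory_class instead of provider_class):

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class CacheConfigExample {
    public static void main(String[] args) {
        // Assumes a hibernate.cfg.xml with entity mappings on the classpath.
        Configuration cfg = new Configuration().configure();
        // Enable the second-level cache and the query cache (Hibernate 3.x style).
        cfg.setProperty("hibernate.cache.use_second_level_cache", "true");
        cfg.setProperty("hibernate.cache.use_query_cache", "true");
        cfg.setProperty("hibernate.cache.provider_class",
                "org.hibernate.cache.EhCacheProvider");

        SessionFactory sessionFactory = cfg.buildSessionFactory();
        Session session = sessionFactory.openSession();
        // Mark an individual query as cacheable; its results are stored against
        // the query string plus the parameter values, as described above.
        session.createQuery("from Employee e where e.department = :dept")
               .setParameter("dept", "IT")
               .setCacheable(true)
               .list();
        session.close();
    }
}
```

Entities also need a cache concurrency strategy, e.g. @org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.READ_ONLY) on the (assumed) Employee class, or an equivalent <cache usage="read-only"/> element in the mapping file.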

Java threads: How many should I create?

Introduction

"How many threads should I create?" Many years ago one of my friends asked me this question, and I gave him the answer that follows the usual guideline: "number of CPU cores + 1". Most of you will be nodding as you read this. Unfortunately, all of us were wrong at that point. Right now I would answer it this way: if your architecture is based on a shared-resource model, then your thread count should be "number of CPU cores + 1" for better throughput; but if your architecture is a shared-nothing model (like SEDA or actors), then you can create as many threads as you need.

Walk-through

So here comes the question: why have so many of our elders continually given us the "number of CPU cores + 1" guideline? Because they told us that thread context switching is heavy and would block your system's scalability. But nobody paid attention to the programming or architecture model this advice assumed. If you read carefully, you will find that most of them were describing programming or architecture models based on shared resources. A couple of examples:

1. Socket programming – the socket layer is shared by many requests, so you need a context switch between every request.
2. Information provider systems – most customers will continually access the same resource.

In these cases, multiple requests access the same resource, so the system must add a lock to that resource to satisfy its consistency requirements. Lock contention then comes into play, and the context switching across many threads becomes very heavy. After I noticed this interesting point, I wondered whether other programming or architecture models could work around that limitation. If the shared-resource model fails when creating more Java threads, maybe we can try a shared-nothing model.

Fortunately, I got a chance to create a system that needed large scalability: the system had to send out lots of notifications very quickly. So I decided to go ahead with the SEDA model for a trial, leveraging my multiple-lane CommonJ pattern. Currently I can run the Java application with a maximum of around 600 threads, with a Java heap setting of 1.5 gigabytes, on one machine. The average memory consumption of one Java thread is around 512 kilobytes (ref: http://www.javacodegeeks.com/2011/04/erlang-vs-java-memory-architecture.html), so 600 threads need roughly 300 MB of memory (including Java native memory and Java heap). And if your system design is good, that 300 MB of usage will not actually be a burden. By the way, on Windows you can't create more than about 1000 threads, since Windows doesn't handle that many threads very well, but you can create 1000 threads on Linux if you leverage NPTL. So when people tell you that Java can't handle large concurrent job processing, that isn't 100% true.

Someone may ask about the thread lifecycle swaps themselves: ready – runnable – running – waiting. I would say that Java and the latest OSes already handle them surprisingly efficiently, and if you have a multi-core CPU and turn on NUMA, overall performance will be enhanced even further. So it's not your bottleneck, at least in the very beginning phase. Of course, creating a thread and bringing it to the running stage are very heavy operations, so please leverage a thread pool (JDK executors), as sketched below. You can also see http://community.jboss.org/people/andy.song/blog/2011/02/22/performance-compare-between-kilim-and-multilane-commj-pattern for the power of many Java threads.
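As a minimal illustration of that advice (my sketch, not from the original post), here are the two sizing strategies side by side using JDK executors:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolSizing {
    public static void main(String[] args) {
        // Shared-resource model: keep the pool small, "number of CPU cores + 1".
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService cpuBoundPool = Executors.newFixedThreadPool(cores + 1);

        // Shared-nothing model (e.g. a SEDA-style stage): the pool can be much
        // larger, since each task works on its own state and lock contention is minimal.
        ExecutorService stagePool = Executors.newFixedThreadPool(600);

        cpuBoundPool.submit(() -> System.out.println("CPU-bound task"));
        stagePool.submit(() -> System.out.println("Stage task, e.g. sending a notification"));

        cpuBoundPool.shutdown();
        stagePool.shutdown();
    }
}
```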
Conclusion

In the future, how will you answer the question "How many Java threads should I create?" I hope your answer will change to:

1. If your architecture is based on a shared-resource model, then your thread count should be "number of CPU cores + 1" for better throughput.
2. If your architecture is a shared-nothing model (like SEDA or actors), then you can create as many threads as you need.

Reference: How many java threads should I create? from our JCG partner Andy Song at the song andy's Stuff blog....

How Employers Measure Passion in Software Engineering Candidates

Over the past few months I have had some exchanges with small-company executives and hiring managers which have opened my eyes to what I consider a relatively new wrinkle in the software development hiring world. I have been recruiting software engineers for 14 years, and I don't recall another time when I've observed this at the same level. Here are two examples.

The first incident was related to a candidate ('A') whose resume I submitted to a local start-up. A was well qualified for the position based on the technical specifications the client gave me, and I anticipated that, at worst, a phone screen for A would be automatic. I even went as far as to share A's interview availability. A day after hitting 'send', I received feedback that the hiring manager was not interested in an interview. A large part of the manager's reasoning was related to the fact that A had taken a two-year sabbatical to pursue a degree in a non-technical discipline and had subsequently taken a job in that field for a brief stint before returning to the software world a few years ago. I clarified information about A to be sure that the manager had a full understanding of the situation, and the verdict was upheld – no interview.

My second anecdote involves another candidate ('B') that I presented for a position with a different client company. B was someone I would classify as a junior-level candidate overall and probably 'borderline qualified' for the role. B had roughly the minimum amount of required experience, with a few gaps, and I was not 100% confident that B would be invited in. B was brought in for an interview, performed about average on the technical portions, and shined interpersonally. As this company does not make a habit of hiring average engineers, I was at least a bit surprised when an offer was made. I was told that a contributing factor in making the offer was that B's 'extracurricular activities' were, according to my client, indicative of someone who was going to be a great engineer (though B's current skills were average). B's potential wasn't being assessed as if B were an entry-level engineer with a solid academic background; rather, the potential was assessed based on B's interest in software.

There are obviously many other stories like these, and the link between them seems obvious. Software firms that are hiring engineers (smaller shops in particular) appear to be qualifying and quantifying a candidate's passion with the same level of scrutiny that they use in trying to measure technical skills and culture fit. Historically, companies have reviewed resumes and conducted interviews to answer the question, 'Can this candidate perform the task at hand?'. For my purposes as a recruiter of engineers, the question can be oversimplified as 'Can he/she code?'. It seems the trend is to follow that question with 'Does he/she CARE about the job, the company, and the craft?'.

If you lack passion for the industry, be advised that in future job interviews you may be judged on this quality. Whether you love coding or not, reading further will give you some insight. Engineer A is a cautionary tale, while B is someone the passionate will want to emulate. Let's start with A.

I don't want to be like A. How can I avoid appearing dispassionate on my resume?

Candidate A never had a chance, and I'll shoulder partial responsibility for that. A was rejected based solely on a resume and my accompanying notes, so theoretically A could be extremely passionate about software engineering without appearing to be so on paper.
Applicants do take some potential risks by choosing to include irrelevant experience, education, or even hobbies on a resume, and I will often warn my candidates about items that could cause alarm. In this case, A's inclusion of both job details and advanced degrees in another discipline was judged as a red flag that A might decide to leave the software industry again. A similar conclusion could have been reached if A had listed hobbies that evidenced a deep-rooted drive toward something other than engineering (say, studying for a certification in a trade).

Another related mistake on resumes is an Objective section that does not reflect the job for which you are applying. I have witnessed candidates being rejected for interviews based on an objective, and the most common example is when a candidate seeking a dev job lists 'technical lead' or 'manager' in the objective. Typical feedback might sound like this: 'Our job is a basic development position, and if she only wants to be in a leadership slot she would not be happy with the role'. Listing the type of job that you are passionate about is essential if you are going to include an objective. I prefer that candidates avoid an Objective section altogether to avoid this specific danger, as most job seekers are open to more than one possible hiring scenario.

I want to be like B. What can I do to highlight my passion during my search?

Since the search starts with the resume, be sure to list all of the details about you that demonstrate your enthusiasm. This should include relevant education, professional experience, and hobbies or activities that pertain to engineering. When listing your professional experience, emphasize the elements of your job that are the most relevant to what you want to do. If you want to do strictly development, downplay the details of your sysadmin or QA tasks (a mention could be helpful, just don't dwell). When listing your academic credentials, recent grads should be sure to provide specifics on classes relevant to their job goals, and it may be in your best interest to remove degrees or advanced courses unrelated to engineering.

In my experience, the most commonly overlooked resume details that would indicate passion are:

- participation in open source projects
- membership in user groups or meetups
- conference attendance
- public-speaking appearances
- engineering-related hobbies (e.g. Arduino, personal/organizational websites you built or maintain, tech blogging)
- technical volunteer/non-profit experience

If any of the above are not on your resume, be sure to include them before your next job search. Assuming that you get the opportunity to interview, try to gracefully and tactfully include some details from the bulleted list above. Your reading habits and the technologies you self-study are best mentioned in interviews, as they may seem less appropriate as resume material.

Conclusion: Most candidates should feel free to at least mention interests that are not engineering-related if the opportunity presents itself, as companies tend to like hiring employees who are not strictly one-dimensional. Just be sure not to overemphasize interests or activities that could be misinterpreted as future career goals. Passion alone won't get you a job, but it can certainly make a difference in a manager's decision on whom to hire (candidate B) and whom not to even interview (candidate A).
Make sure you use your resume and interview time to show your passion. Reference: How Employers Measure Passion in Software Engineering Candidates (and how to express your passion in resumes and interviews) from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Spring JDBC Database connection pool setup

Setting up a JDBC database connection pool in the Spring framework is easy for any Java application – it's just a matter of changing a few configurations in the Spring configuration file. If you are writing a core Java application that does not run on a web or application server like Tomcat or WebLogic, managing the database connection pool using Apache Commons DBCP and Commons Pool along with the Spring framework is a nice choice. But if you have the luxury of a web server and a managed J2EE container, consider using a connection pool managed by the J2EE server. Those are a better option in terms of maintenance and flexibility, and they also help to prevent java.lang.OutOfMemoryError: PermGen space in Tomcat by avoiding the loading of the JDBC driver in the web-app class loader. Keeping the JDBC connection pool information in the server also makes it easy to change or include settings for JDBC over SSL. In this article we will see how to set up a database connection pool in the Spring framework using Apache Commons DBCP and commons-pool.jar. This article is in continuation of my tutorials on the Spring framework and databases, like LDAP Authentication in J2EE with Spring Security and managing sessions using Spring Security. If you haven't read those articles, you may find them useful.

Spring Example: JDBC Database Connection Pool

The Spring framework provides the convenient JdbcTemplate class for performing all database-related operations. If you are not using Hibernate, then using Spring's JdbcTemplate is a good option. JdbcTemplate requires a DataSource, which is a javax.sql.DataSource implementation; you can configure it directly as a Spring bean or obtain it via JNDI if you are using a J2EE web server or application server for managing the connection pool. See "How to setup JDBC connection Pool in tomcat and Spring" for more details on JNDI-based connection pooling. In order to set up the data source you will require the following configuration in your applicationContext.xml (Spring configuration) file:

```xml
<!-- DataSource connection settings in Spring -->
<bean id="springDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="url" value="jdbc:oracle:thin:@localhost:1521:SPRING_TEST" />
    <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" />
    <property name="username" value="root" />
    <property name="password" value="root" />
    <property name="removeAbandoned" value="true" />
    <property name="initialSize" value="20" />
    <property name="maxActive" value="30" />
</bean>

<!-- DAO class configuration in Spring -->
<bean id="EmployeeDatabaseBean" class="com.test.EmployeeDAOImpl">
    <property name="dataSource" ref="springDataSource"/>
</bean>
```

The DBCP connection pool configuration above will create 20 database connections, since initialSize is 20, and will grow up to 30 database connections if required, since maxActive is 30. You can customize your database connection pool further using the different properties provided by the Apache DBCP library. The example above creates a connection pool against an Oracle 11g database, and we are using oracle.jdbc.driver.OracleDriver, which comes with ojdbc6.jar or ojdbc6_g.jar; to learn more about how to connect to an Oracle database from a Java program, see the link.

Java Code for Using the Connection Pool in Spring

Below is a complete code example of a DAO class which uses Spring JdbcTemplate to execute a SELECT query against the database using a database connection from the connection pool.
If you are not initializing the database connection pool on start-up, it may take a while when you execute your first query, because the pool needs to create a certain number of SQL connections before the query executes. But once the connection pool is created, subsequent queries will execute faster.

```java
// Code for DAO class using Spring JdbcTemplate
package com.test;

import javax.sql.DataSource;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.JdbcTemplate;

/**
 * Java program example to use a DBCP connection pool with the Spring framework
 * @author Javin Paul
 */
public class EmployeeDAOImpl implements EmployeeDAO {

    private Logger logger = LoggerFactory.getLogger(EmployeeDAOImpl.class);
    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    @Override
    public boolean isEmployeeExists(String emp_id) {
        try {
            logger.debug("Checking Employee in EMP table using Spring JdbcTemplate");
            int number = this.jdbcTemplate.queryForInt("select count(*) from EMP where emp_id=?", emp_id);
            if (number > 0) {
                return true;
            }
        } catch (Exception exception) {
            exception.printStackTrace();
        }
        return false;
    }
}
```

Dependencies:

1. You need to include the Oracle driver JAR, like ojdbc6.jar, in your classpath.
2. The Apache DBCP and commons-pool JARs must be in the application classpath.

That's all on how to configure a JDBC database connection pool in the Spring framework. As I said, it's pretty easy using the Apache DBCP library – just a matter of a few configurations in the Spring applicationContext.xml and you are ready. If you want to configure a JDBC connection pool on Tomcat (a JNDI connection pool) and use it in Spring, see here. Reference: JDBC Database connection pool in Spring Framework – How to Setup Example from our JCG partner Javin Paul at the Javarevisited blog....
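As a quick follow-up usage illustration (my sketch, not part of the original article – the EmployeeDAO interface is assumed to declare isEmployeeExists, and "7839" is just a sample employee id), a standalone client could bootstrap the configuration above like this:

```java
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class EmployeeDAOClient {
    public static void main(String[] args) {
        // Loads applicationContext.xml from the classpath, which creates the
        // DBCP connection pool and wires it into the DAO bean defined above.
        ApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext.xml");
        EmployeeDAO dao = (EmployeeDAO) ctx.getBean("EmployeeDatabaseBean");
        System.out.println("Employee exists: " + dao.isEmployeeExists("7839"));
    }
}
```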

Spring Profiles in XML Config Files

My last blog was very simple, covering my painless upgrade from Spring 3.0.x to Spring 3.1.x, and I finished by mentioning that you can upgrade your Spring schemas to 3.1 to take advantage of Spring's newest features. In today's blog, I'm going to cover one of the coolest of these features: Spring profiles. But before talking about how you implement Spring profiles, I thought that it would be a good idea to explore the problem that they solve, which is the need to create different Spring configurations for different environments. This usually arises because your app needs to connect to several similar external resources during its development lifecycle, and more often than not these 'external resources' are databases, although they could be JMS queues, web services, remote EJBs, etc. The number of environments that your app has to work on before it goes live usually depends upon a few things, including your organization's business processes, the scale of your app and its 'importance' (i.e. if you're writing the tax collection system for your country's revenue service then the testing process may be more rigorous than if you're writing an eCommerce app for a local shop). Just so that you get the picture, below is a quick (and probably incomplete) list of all the different environments that came to mind:

- Local Developer Machine
- Development Test Machine
- The Test Team's Functional Test Machine
- The Integration Test Machine
- Clone Environment (a copy of live)
- Live

This is not a new problem, and it's usually solved by creating a set of Spring XML and properties files for each environment. The XML files usually consist of a master file that imports other environment-specific files. These are then coupled together at compile time to create different WAR or EAR files. This method has worked for years, but it does have a few problems:

- It's non-standard. Each organization usually has its own way of tackling this problem, with no two methods being quite the same.
- It's difficult to implement, leaving lots of room for errors.
- A different WAR/EAR file has to be created for, and deployed on, each environment, taking time and effort that could be better spent writing code.

The differences in the Spring bean configurations can normally be divided into two kinds. Firstly, there are environment-specific properties such as URLs and database names. These are usually injected into Spring XML files using the PropertyPlaceholderConfigurer class and the associated ${} notation:

```xml
<bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
        <list>
            <value>db.properties</value>
        </list>
    </property>
</bean>
```

Secondly, there are environment-specific bean classes such as data sources, which usually differ depending upon how you're connecting to a database.
For example, in development you may have:

```xml
<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName">
        <value>${database.driver}</value>
    </property>
    <property name="url">
        <value>${database.uri}</value>
    </property>
    <property name="username">
        <value>${database.user}</value>
    </property>
    <property name="password">
        <value>${database.password}</value>
    </property>
</bean>
```

…whilst in test or live you'll simply write:

```xml
<jee:jndi-lookup id="dataSource" jndi-name="jdbc/LiveDataSource"/>
```

The Spring guidelines say that Spring profiles should only be used for the second example above – environment-specific bean classes – and that you should continue to use PropertyPlaceholderConfigurer to initialize simple bean properties; however, you may want to use Spring profiles to inject an environment-specific PropertyPlaceholderConfigurer into your Spring context. Having said that, I'm going to break this convention in my sample code, as I want the simplest code possible to demonstrate Spring profiles' features.

Spring Profiles and XML Configuration

In terms of XML configuration, Spring 3.1 introduces the new profile attribute on the beans element of the spring-beans schema:

```xml
<beans profile="dev">
```

It's this profile attribute that acts as a switch when enabling and disabling profiles in different environments. To explain all this further, I'm going to use the simple idea that your application needs to load a person class, and that person class contains different properties depending upon the environment on which your program is running. The Person class is very trivial and looks something like this:

```java
public class Person {

    private final String firstName;
    private final String lastName;
    private final int age;

    public Person(String firstName, String lastName, int age) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.age = age;
    }

    public String getFirstName() {
        return firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public int getAge() {
        return age;
    }
}
```

…and it is defined in the following XML configuration files:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.1.xsd"
       profile="test1">

    <bean id="employee" class="profiles.Person">
        <constructor-arg value="John" />
        <constructor-arg value="Smith" />
        <constructor-arg value="89" />
    </bean>
</beans>
```

…and

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.1.xsd"
       profile="test2">

    <bean id="employee" class="profiles.Person">
        <constructor-arg value="Fred" />
        <constructor-arg value="ButterWorth" />
        <constructor-arg value="23" />
    </bean>
</beans>
```

…called test-1-profile.xml and test-2-profile.xml respectively (remember these names, they're important later on). As you can see, the only differences in configuration are the first name, last name and age properties. Unfortunately, it's not enough simply to define your profiles; you have to tell Spring which profile you're loading.
This means that the following old 'standard' code will now fail:

```java
@Test(expected = NoSuchBeanDefinitionException.class)
public void testProfileNotActive() {

    // Ensure that properties from other tests aren't set
    System.setProperty("spring.profiles.active", "");
    ApplicationContext ctx = new ClassPathXmlApplicationContext("test-1-profile.xml");
    Person person = ctx.getBean(Person.class);
    String firstName = person.getFirstName();
    System.out.println(firstName);
}
```

Fortunately there are several ways of selecting your profile, and to my mind the most useful is the 'spring.profiles.active' system property. For example, the following test will now pass:

```java
System.setProperty("spring.profiles.active", "test1");
ApplicationContext ctx = new ClassPathXmlApplicationContext("test-1-profile.xml");
Person person = ctx.getBean(Person.class);
String firstName = person.getFirstName();
assertEquals("John", firstName);
```

Obviously, you wouldn't want to hard-code things as I've done above, and best practice usually means keeping the system property definitions separate from your application. This gives you the option of using either a simple command-line argument such as:

```
-Dspring.profiles.active="test1"
```

…or adding:

```
# Setting a property value
spring.profiles.active=test1
```

…to Tomcat's catalina.properties. So, that's all there is to it: you create your Spring XML profiles using the beans element's profile attribute and switch on the profile you want to use by setting the spring.profiles.active system property to your profile's name.

Accessing Some Extra Flexibility

However, that's not the end of the story, as the guys at Spring have added a number of ways of programmatically loading and enabling profiles – should you choose to do so.

```java
@Test
public void testProfileActive() {

    ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext("test-1-profile.xml");
    ConfigurableEnvironment env = ctx.getEnvironment();
    env.setActiveProfiles("test1");
    ctx.refresh();
    Person person = ctx.getBean("employee", Person.class);
    String firstName = person.getFirstName();
    assertEquals("John", firstName);
}
```

In the code above, I've used the new ConfigurableEnvironment class to activate the "test1" profile.

```java
@Test
public void testProfileActiveUsingGenericXmlApplicationContextMultipleFilesSelectTest1() {

    GenericXmlApplicationContext ctx = new GenericXmlApplicationContext();
    ConfigurableEnvironment env = ctx.getEnvironment();
    env.setActiveProfiles("test1");
    ctx.load("*-profile.xml");
    ctx.refresh();
    Person person = ctx.getBean("employee", Person.class);
    String firstName = person.getFirstName();
    assertEquals("John", firstName);
}
```

However, the guys at Spring now recommend that you use the GenericApplicationContext class instead of ClassPathXmlApplicationContext and FileSystemXmlApplicationContext, as this provides additional flexibility. For example, in the code above, I've used GenericXmlApplicationContext's load(...) method to load a number of configuration files using a wildcard:

```java
ctx.load("*-profile.xml");
```

Remember the filenames from earlier on? This will load both test-1-profile.xml and test-2-profile.xml. Profiles also include additional flexibility that allows you to activate more than one at a time.
If you take a look at the code below, you can see that I'm activating both of my test1 and test2 profiles:

```java
@Test
public void testMultipleProfilesActive() {

    GenericXmlApplicationContext ctx = new GenericXmlApplicationContext();
    ConfigurableEnvironment env = ctx.getEnvironment();
    env.setActiveProfiles("test1", "test2");
    ctx.load("*-profile.xml");
    ctx.refresh();
    Person person = ctx.getBean("employee", Person.class);
    String firstName = person.getFirstName();
    assertEquals("Fred", firstName);
}
```

Beware: in this example I have two beans with the same id of "employee", and there's no way of telling which one is valid and is supposed to take precedence. From my test, I'm guessing that the second one that's read overwrites, or masks access to, the first. This is okay, as you're not supposed to have multiple beans with the same name – it's just something to watch out for when activating multiple profiles. Finally, one of the better simplifications you can make is to use nested <beans/> elements:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.1.xsd">

    <beans profile="test1">
        <bean id="employee1" class="profiles.Person">
            <constructor-arg value="John" />
            <constructor-arg value="Smith" />
            <constructor-arg value="89" />
        </bean>
    </beans>

    <beans profile="test2">
        <bean id="employee2" class="profiles.Person">
            <constructor-arg value="Bert" />
            <constructor-arg value="William" />
            <constructor-arg value="32" />
        </bean>
    </beans>
</beans>
```

This takes away all the tedious mucking about with wildcards and loading multiple files, albeit at the expense of a minimal amount of flexibility. My next blog concludes my look at Spring profiles by taking a look at the @Configuration annotation used in conjunction with the new @Profile annotation… so, more on that later. Reference: Using Spring Profiles in XML Config from our JCG partner Roger Hughes at the Captain Debug's Blog blog....

8 Ways to improve your Java EE Production Support skills

Everybody involved in Java EE production support knows this job can be difficult: 24/7 pager support, multiple incidents and bug fixes to deal with on a regular basis, and pressure from the client and the management team to resolve production problems as fast as possible and prevent recurrences. On top of your day-to-day work, you also have to take care of multiple application deployments driven by multiple IT delivery teams. Sound familiar? As hard as it can be, the reward for your hard work can be significant. You may have noticed from my past articles that I'm quite passionate about Java EE production support, root cause analysis and performance-related problems. This post is all about sharing a few tips and work principles I have applied over the last 10+ years working with multiple Java EE production support teams, onshore and offshore. This article will give you 8 ways to improve your production support skills, which may help you better enjoy your IT support job and ultimately become a Java EE production support guru.

#1 – Partner with your clients and delivery teams

My first recommendation should not be a surprise to anybody. Regardless of how good you are from a technical perspective, you will be unable to succeed as a great production support leader if you fail to partner with your clients and IT delivery teams. You have to realize that you are providing a service to your client, who is the owner and master of the IT production environment. You are expected to ensure the availability of the critical Java EE production systems and to address known problems and future problems to come. Stay away from damaging attitudes, such as the false impression that you are the actual owner, or getting frustrated at your client for their lack of understanding of a problem. Your job is to get all the facts right and provide good recommendations to your client so they can make the right decisions. Over time, a solid trust will be established between you and your client, with great benefits and opportunities.

Building a strong relationship with the IT delivery team is also very important. The delivery team, which includes IT architects, project managers and technical resources, is seen as the team of experts responsible for building and enhancing the Java EE production environments via their established project delivery model. Over the years, I have seen several examples of friction between these two actors. The support team tends to be overly critical of the delivery team's work, due to bad experiences with failed deployments, surges of production incidents, etc. I have also noticed examples where the delivery team tends to lack confidence in the support team's capabilities, again due to bad experiences in the context of failed deployments or a lack of proper root cause analysis or technical knowledge. As a production support individual, you have to build your credibility and stay away from a negative and unprofessional attitude. Building credibility means hard work, proper gathering of facts, technical and root cause analysis, showing interest in learning new solutions, etc. This will increase the trust with the delivery team and allow you to gain significant exposure and experience in the long term. Ultimately, you will be able to work with, and provide consultation to, both teams. Proper balance and professionalism between these 3 actors is key for any successful IT production environment.
#2 – Every production incident is a learning opportunity

One of the great things about Java EE production support is the multiple learning opportunities you are exposed to. You may have realized that after each production outage you achieved at least one of the following goals:

- You gained new technical knowledge from a new problem type
- You increased your knowledge and experience of a known situation
- You increased your visibility and trust with your operations client
- You were able to share your existing knowledge with other team members, allowing them to succeed and resolve the problem

Please note that it is also normal to face negative experiences from time to time. You will grow stronger from those as well and learn from your mistakes. Recurring problems, incidents and preventive work still offer you opportunities to gather more technical facts, pinpoint the root cause or come up with recommendations for a permanent resolution. The bottom line is that the more incidents you are involved with, the better. It is OK if you are not yet comfortable taking an active role in incident recovery, but please ensure that you are present so you can at least gain experience and knowledge from your more experienced team members.

#3 – Don't fear change, embrace it

One common problem I have noticed across Java EE support teams is a fear factor around production platform changes such as project deployments, or infrastructure and network level changes. Below are a few reasons for this common fear:

- For many support team members, application "change" is synonymous with production "instability"
- Lack of understanding of the project itself or the scope of changes automatically translates into fear
- A low comfort level in executing the requested application or middleware changes

Such a fear factor is often a symptom of gaps in the current release management process between the three main actors, or of production platform problems such as:

- Lack of proper knowledge transfer between the IT delivery and support teams
- An already unstable production environment prior to the new project deployment
- Lack of deep technical knowledge of Java EE or the middleware

Fear can be a serious obstacle to your future growth and must be dealt with seriously. My recommendation is that, regardless of the existing gaps within your organization, you simply embrace the changes, but combine them with proper due diligence: asking for more knowledge transfer, participating in project deployment strategy and risk assessments, performing code walkthroughs and so on. This will allow you to eliminate that "fear" attitude and gain experience and credibility with your IT delivery team and client. It will also give you opportunities to build recommendations for future project deployments and infrastructure-related improvements. Finally, if you feel that you lack the technical knowledge to implement the changes, simply say so and ask for a more experienced team member to shadow your work. This approach will reduce your fear level and allow you to gain experience with minimal risk.

#4 – Learn how to read JVM Thread Dump and monitoring tools data

I'm sure you have noticed from my past articles and case studies that I use JVM Thread Dumps a lot. This is for a reason: Thread Dump analysis is one of the most important and valuable skills to acquire for any successful Java EE production support individual. I analyzed my first Thread Dump 10 years ago when troubleshooting a Weblogic 6 problem running on JDK 1.3. 10 years and hundreds of Thread Dump snapshots later, I'm still learning new problem patterns. The good part about the JVM and Thread Dumps is that you will always find new patterns to identify and understand. I can guarantee you that once you acquire this knowledge (along with JVM fundamentals), not only will a lot of production incidents be easier to pinpoint, but they will also be much more fun and self-rewarding to work on. Given how easy, fast and non-intrusive it is these days to generate a JVM Thread Dump, there is simply no excuse not to learn this key troubleshooting technique.
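As a minimal illustration (my addition, not part of the original article): a thread dump can be captured from the command line with the JDK's jstack utility (jstack <pid>), or with kill -3 on Unix, and the same snapshot can be taken programmatically through the standard java.lang.management API, as sketched below. The class name is hypothetical.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Hypothetical helper: prints a point-in-time snapshot of every live thread,
// including lock information, similar to what jstack or kill -3 would produce.
public class ThreadDumpSketch {
    public static void main(String[] args) {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        // (true, true) => also report locked monitors and ownable synchronizers
        for (ThreadInfo info : threadMXBean.dumpAllThreads(true, true)) {
            System.out.print(info); // thread name, state and a (truncated) stack trace
        }
    }
}

In practice, taking a few such snapshots a handful of seconds apart is what reveals stuck or looping threads, since you can see which stacks are not moving.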
My other recommendation is to learn how to use your existing monitoring tools and interpret their data. Java EE monitoring tools are highly valuable weapons for any production support individual involved in day-to-day support. Depending on the commercial or free tools used by your IT client, they will provide you with a performance view of your Java EE applications, your middleware (Weblogic, JBoss, WAS…) and the JVM itself. This historical data is also critical when performing root cause analysis following a major production outage. Proper knowledge and understanding of the data will allow you to understand the IT platform's performance and capacity, and give you opportunities to work with the IT capacity planning analysts and architects, who are accountable for ensuring the long-term stability and scalability of the IT production environment.

#5 – Learn how to write code and perform code walkthroughs

My next recommendation is to improve your coding skills. One of the most important responsibilities of a Java EE production support team, on top of regular bug fixes, is to act as a "gate keeper", i.e. the last line of defense before the implementation of a project. This risk assessment exercise involves not only reviews of the project, test results and performance test reports, but also code walkthroughs. Unfortunately, this review is often not performed properly, if it is done at all. The goal of the exercise is to identify areas for improvement and detect code defects that are potentially harmful to the production environment, such as thread safety problems or missing IO/Socket-related timeouts (see the sketch after the list below). Your ability to perform such a code assessment depends on your coding skills and your overall knowledge of Java EE design patterns and anti-patterns. Improving your coding skills can be done by following a few strategies such as:

- Explore opportunities within your IT organization to perform delivery work
- Jump on any opportunity to review, officially or unofficially, existing or new project code
- Create personal Java EE development projects relevant to your day-to-day work and long-term career
- Join Java/Java EE Open Source projects and communities (Apache, JBoss, Spring…)
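To make the "missing timeouts" point concrete, here is a minimal sketch (my addition, with a placeholder URL and a hypothetical class name) of the kind of defect a walkthrough should flag, together with the one-line fixes that prevent it:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical walkthrough finding: without explicit timeouts, a hung remote
// endpoint can block the calling thread indefinitely; enough of these and the
// whole thread pool is exhausted (a classic stuck-thread pattern in dumps).
public class TimeoutWalkthroughSketch {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/api").openConnection();
        conn.setConnectTimeout(5000); // fail fast if the TCP connection cannot be established
        conn.setReadTimeout(10000);   // bound the wait for data once connected
        InputStream in = conn.getInputStream();
        try {
            // ... consume the response here ...
        } finally {
            in.close(); // always release the underlying socket
        }
    }
}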
#6 – Don't pretend that you know everything about Java, JVM & Middleware

Another common problem I have noticed with many Java EE production support individuals is a skill "plateau". This is especially problematic when working on static IT production environments with few changes and hardening improvements. In this context, you quickly get used to your day-to-day work, the technology used and the known problems. You then become very comfortable with your tasks, with a false impression of seniority. Then one day your IT organization is faced with a re-org, or you have to work for a new client. At this point you are shocked and struggle to overcome the new challenges. What happened?

- You reached a skill plateau within your small Java EE application list and middleware bubble
- You failed to invest time in yourself outside of your work IT bubble
- You failed to acknowledge your lack of deeper Java, Java EE and middleware knowledge, i.e. a false impression of knowing everything
- You failed to keep your eyes open and explore the rest of the IT world and Java community

My main recommendation is that when you feel overconfident or overqualified in your current role, it is time to move on and take on new challenges. This could mean a different role within your existing support team, moving to a project delivery team for a certain time, or completely switching jobs and/or IT clients. Constantly seeking new challenges will lead to:

- A significant increase in knowledge due to a higher diversity of technologies such as JVM vendors (HotSpot, IBM JVM, Oracle JRockit…), middleware (Weblogic, JBoss, WAS…), databases, OS, infrastructure etc.
- A significant increase in knowledge due to a higher diversity of implementations and solutions (SOA, Web development / portals, middle-tier, legacy integration, mobile development etc.)
- Increased learning opportunities due to new types of production incidents
- Increased visibility within your IT organization and the Java community
- Improved client skills and contacts
- Increased resistance to working under stress, e.g. learning how to use stress and adrenaline to your advantage (the typical boost you can get during a severe production outage)

#7 – Share your knowledge with your team and the Java community

Sharing your Java EE skills and production support experience is a great way to improve and maintain a strong relationship with your support team members. I also encourage you to participate in the Java community (blogs, forums, Open Source groups etc.) and share your Java EE production problems there, since a lot of problems are common and I'm sure people can benefit from your experience. One approach that I follow myself and highly recommend is to schedule regular (ideally weekly) internal training sessions. The topic is typically chosen via a simple voting system and presented by different members, when possible. A good sharing mentality will naturally lead you to more research and reading, further increasing your skills in the long term.

#8 – Rise to the Challenge

At this point you have acquired a solid knowledge foundation and key troubleshooting skills. You have been involved in many production incidents with a good understanding of the root causes and resolutions. You understand your IT production environment well, and your client is starting to request your presence directly on critical incidents. You are also spending time every week improving your coding skills and sharing with the Java community… but are you really up to the challenge? A true hero can be defined as an individual with a great capability to rise to the challenge and lead others to victory. Obviously you are not expected to save the world, but you can still be the "hero of the day" by rising to the challenge and leading your support team to the resolution of critical production outages. A truly successful and recognized Java EE production support person is not necessarily the strongest technical resource, but one who has learned how to properly balance their technical knowledge and client skills, along with a strong capability to rise to the challenge and take the lead when faced with difficult situations.
I really hope that these tips can help you in your day-to-day Java EE production support work. Please share your experience and your own tips on how to improve your Java EE production support skills. Reference: 8 Ways to improve your Java EE Production Support skills from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog....