
What's New Here?


Enterprise SOAP Services in a Web 2.0 REST/JSON World

With the popularity of JSON and other NoSQL data standard formats, the complexity and in some cases the plain verbosity of XML formats are being shunned. However, XML and the abilities and standards that have formed around it have become key to the enterprise and its business processes. At the same time, the needs of their customers require that they start to support multiple formats. To this end, tooling frameworks like Axis2 have started to add support for taking WSDL based services and generating JSON responses.

Enterprises need to live in this post-SOAP world, but leverage the expertise and SOA infrastructures they have developed over the years. Axis2 is one way to do it, but it doesn't provide monitoring and policy support out of the box. Another alternative is the Turmeric SOA project from eBay. Natively, out of the box, one can take a WSDL like the one provided by the STAR standards organization and add support not only for SOAP 1.1/1.2, but also for REST style services serving XML, JSON, and any other data format you would need to support.

There is a catch though: Turmeric SOA was not designed with full SOAP and the W3C web service stack in mind. It uses WSDL only to describe the operations and the data formats supported by the service. So advanced features like WS-Security, WS-Reliable Messaging, and XML Encryption are not natively built into Turmeric. Depending on your needs you will have to work with pipeline Handlers and enhance the protocol processors to support some of the advanced features. However, these are items that can be worked around, and Turmeric can interoperate with existing web services to act as a proxy.

As an example, the STAR organization provides a web service specification that has been implemented in the automotive industry to provide transports for their specifications. Using a framework like Turmeric SOA, existing applications can be made available to trading partners and consumers of the service in multiple formats. For instance, one could provide data in RESTful XML:

<?xml version='1.0' encoding='UTF-8'?> <ns2:PullMessageResponse xmlns:ms="http://www.ebayopensource.org/turmeric/common/v1/types" xmlns:ns2="http://www.starstandards.org/webservices/2009/transport"> <ns2:payload> <ns2:content id="1"/> </ns2:payload> </ns2:PullMessageResponse>

Or one can provide the same XML represented in a JSON format:

{ "jsonns.ns2": "http://www.starstandards.org/webservices/2009/transport", "jsonns.ns3": "http://www.starstandards.org/webservices/2009/transport/bindings", "jsonns.ms": "http://www.ebayopensource.org/turmeric/common/v1/types", "jsonns.xs": "http://www.w3.org/2001/XMLSchema", "jsonns.xsi": "http://www.w3.org/2001/XMLSchema-instance", "ns2.PullMessageResponse": { "ns2.payload": { "ns2.content": [ { "@id": "1" } ] } } }

The above is generated from the same web service, with just a header changed to indicate the data format that should be returned. No changes to the business logic or the web service implementation code itself are needed. In Turmeric this is handled with the Data Binding Framework and its corresponding pipeline handlers. With Axis2 this is a message request configuration entry. Regardless of how it is done, it is important to be able to leverage existing services, but provide the data in a format that your consumers require.
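To make the header-driven format selection described above concrete, here is a minimal client sketch. The endpoint URL and the header name (shown as X-TURMERIC-RESPONSE-DATA-FORMAT) are assumptions for illustration only, so check them against your own Turmeric deployment before relying on them.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class FormatSelectionSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint of the deployed STAR transport service
        URL url = new URL("http://localhost:8080/services/PullMessage");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // Assumed header name: request the JSON representation; switching the value
        // back to "XML" should return the RESTful XML payload from the same service code.
        conn.setRequestProperty("X-TURMERIC-RESPONSE-DATA-FORMAT", "JSON");

        BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line); // same business data, different wire format
        }
        reader.close();
        conn.disconnect();
    }
}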
For those that are interested, I've created a sample STAR Web Service that can be used with Turmeric SOA to see how this works. The code is available at github.

While Turmeric handles the basics of the SOAP protocol, advanced items like using and storing information in the soap:header are not as easily supported. You can get to the information, but because the WSDL in Turmeric services is there just to describe the data formats and messages, the underlying SOAP transport support is not necessarily leveraged to the full specification. Depending on your requirements, Axis2 may be better, but Turmeric SOA provides additional items like Service Metrics, a Monitoring Console, Policy Administration, Rate Limiting through XACML 2.0 based policies, and much more. If you already have existing web services written the W3C way, but need to provide data in other formats, Turmeric can be used alongside them. It isn't a one-or-the-other proposition. Leverage the tools that give you the greatest flexibility to provide the data to your consumers, with the least amount of effort.

Reference: Enterprise SOAP Services in a Web 2.0 REST/JSON World from our JCG partner David Carver at the Intellectual Cramps blog....

Using EasyMock or Mockito

I have been using EasyMock for most of the time, but recently I worked with a few people who were pretty much inclined to use Mockito. Not intending to use two frameworks for the same purpose in the same project, I adopted Mockito. So for the last couple of months I have been using Mockito, and here is my comparative analysis of the two.

The people with whom I have worked cite test readability as their reason for using Mockito, but I have a different opinion on that. Suppose we have the following code that we intend to test:

public class MyApp { MyService service; OtherService otherService;void operationOne() { service.operationOne(); }void operationTwo(String args) { String operationTwo = otherService.operationTwo(args); otherService.operationThree(operationTwo); }void operationThree() { service.operationOne(); otherService.operationThree("success"); } }class MyService { void operationOne() {} }class OtherService { public String operationTwo(String args) { return args; }public void operationThree(String operationTwo) {} }

Now let me write a simple test case for this class using EasyMock and then using Mockito.

public class MyAppEasyMockTest { MyApp app; MyService service; OtherService otherService;@Before public void initialize() { service = EasyMock.createMock(MyService.class); otherService = EasyMock.createMock(OtherService.class); app = new MyApp(); app.service = service; app.otherService = otherService; }@Test public void verifySimpleCall() { service.operationOne(); EasyMock.replay(service); app.operationOne(); EasyMock.verify(service); } }

public class MyAppMockitoTest { MyApp app; MyService service; OtherService otherService;@Before public void initialize() { service = Mockito.mock(MyService.class); otherService = Mockito.mock(OtherService.class); app = new MyApp(); app.service = service; app.otherService = otherService; }@Test public void verifySimpleCall() { app.operationOne(); Mockito.verify(service).operationOne(); } }

This is a really simple test, and I must say the Mockito one is more readable. But according to the classic testing methodology the Mockito test is not complete. We have verified the call that we are looking for, but if tomorrow I change the source code by adding one more call to service, the test would not break:

void operationOne() { service.operationOne(); service.someOtherOp(); }

Now this makes me feel that the tests are not good enough. But thankfully Mockito gives us verifyNoMoreInteractions, which can be used to complete the test.
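For completeness, here is a sketch of what the simple Mockito test might look like once that extra check is added; this snippet is illustrative and not taken from the original article:

@Test
public void verifySimpleCallStrictly() {
    app.operationOne();
    Mockito.verify(service).operationOne();
    // fails if operationOne() ever starts making additional, unverified calls on the mock
    Mockito.verifyNoMoreInteractions(service);
}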
Now let me write a few more tests for the MyApp class.

public class MyAppEasyMockTest { @Test public void verifyMultipleCalls() { String args = "one"; EasyMock.expect(otherService.operationTwo(args)).andReturn(args); otherService.operationThree(args); EasyMock.replay(otherService); app.operationTwo(args); EasyMock.verify(otherService); }@Test(expected = RuntimeException.class) public void verifyException() { service.operationOne(); EasyMock.expectLastCall().andThrow(new RuntimeException()); EasyMock.replay(service); app.operationOne(); }@Test public void captureArguments() { Capture<String> captured = new Capture<String>(); service.operationOne(); otherService.operationThree(EasyMock.capture(captured)); EasyMock.replay(service, otherService); app.operationThree(); EasyMock.verify(service, otherService); assertTrue(captured.getValue().contains("success")); }}

public class MyAppMockitoTest { @Test public void verifyMultipleCalls() { String args = "one"; Mockito.when(otherService.operationTwo(args)).thenReturn(args); app.operationTwo(args); Mockito.verify(otherService).operationTwo(args); Mockito.verify(otherService).operationThree(args); Mockito.verifyNoMoreInteractions(otherService); Mockito.verifyZeroInteractions(service); }@Test(expected = RuntimeException.class) public void verifyException() { Mockito.doThrow(new RuntimeException()).when(service).operationOne(); app.operationOne(); }@Test public void captureArguments() { app.operationThree(); ArgumentCaptor capturedArgs = ArgumentCaptor .forClass(String.class); Mockito.verify(service).operationOne(); Mockito.verify(otherService).operationThree(capturedArgs.capture()); assertTrue(capturedArgs.getValue().contains("success")); Mockito.verifyNoMoreInteractions(service, otherService); } }

These are some practical scenarios of testing where we would like to assert arguments, exceptions, etc. If I compare the ones written using EasyMock with the ones using Mockito, I tend to feel that both sets of tests are equal in readability; neither does a better job. The large number of expect and return calls in EasyMock makes the tests less readable, and the verify statements of Mockito often compromise test readability. As per the Mockito documentation, verifyZeroInteractions and verifyNoMoreInteractions should not be used in every test that you write, but if I leave them out of my tests then my tests are not good enough.

Moreover, in tests everything should be under the control of the developer, i.e. how the interactions are happening and what interactions are happening. In EasyMock this aspect is more visible, as the developer must put down all of these interactions in his code; in Mockito the framework takes care of all interactions and the developer is only concerned with their verification (if any). But this can lead to testing scenarios where the developer is not in control of all interactions.

There are a few nice things that Mockito has, like the JUnit runner that can be used to create mocks of all the required dependencies. It is a nice way of removing some of the infrastructure code, and EasyMock should also have one.

@RunWith(MockitoJUnitRunner.class) public class MyAppMockitoTest { MyApp app; @Mock MyService service; @Mock OtherService otherService;@Before public void initialize() { app = new MyApp(); app.service = service; app.otherService = otherService; } }

Conclusion: Since I have used both frameworks, I feel that except for simple test cases both EasyMock and Mockito lead to test cases that are equal in readability.
But EasyMock is better for unit testing, as it forces the developer to take control of things. Mockito, due to its assumptions and considerations, hides this control under the carpet and thus is not a good choice. But Mockito offers certain things that are quite useful (e.g. the JUnit runner, call chaining), and EasyMock should have equivalents in its next release. Reference: Using EasyMock or Mockito from our JCG partner Rahul Sharma at The road so far… blog....

ADF : Dynamic View Object

Today I want to write about dynamic view objects, which allow me to change the data source (SQL query) and attributes at run time. I will use the oracle.jbo.ApplicationModule::createViewObjectFromQueryStmt method to do this. I will show how to do it step by step.

Create View Object and Application Module
1- Right click on the Model project and choose New.
2- Choose "ADF Business Component" from the left pane, then choose "View Object" from the list and click the "OK" button.
3- Enter "DynamicVO" in "Name", choose the "Sql Query" radio button and click the "Next" button.
4- Write "select * from dual" in the Select field and click the "Next" button until you reach the "Step 8 of 9" window.
5- Check the "Add to Application Module" check box and click the "Finish" button.

Implement Changes in Application Module
1- Open the application module "AppModule", then open the Java tab and check the "Generate Application Module Class AppModuleImpl" check box.
2- Open the AppModuleImpl.java class and add the method below for the dynamic view object:

public void changeDynamicVoQuery(String sqlStatement) { ViewObject dynamicVO = this.findViewObject("DynamicVO1"); dynamicVO.remove(); dynamicVO = this.createViewObjectFromQueryStmt("DynamicVO1", sqlStatement); dynamicVO.executeQuery(); }

3- Open "AppModule", then open the Java tab and add the changeDynamicVoQuery method to the Client Interface.

Test Business Component
1- Right click on AppModule in the Application navigator and choose Run from the drop down list.
2- Right click on AppModule in the left pane and choose Show from the drop down list. Write "Select * from Emp" in the sqlStatement parameter and click the Execute button; the result will be Success.
3- Double-click DynamicVO1 in the left pane; it will display the data of DynamicVO, showing the data for the "Select * from Emp" query entered at run time rather than the "select * from dual" query used at design time of the view object.

To use dynamic view objects in ADF Faces, you should use ADF Dynamic Table or ADF Dynamic Form. You can download the sample application from here.

Reference: ADF : Dynamic View Object from our JCG partner Mahmoud A. ElSayed at the Dive in Oracle blog....
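As a small follow-up to the ADF steps above: the exposed client method can also be exercised from plain Java code instead of the Business Component Browser. This is only a rough sketch; the application module definition name ("model.AppModule"), the configuration name ("AppModuleLocal") and the client interface package are assumptions based on JDeveloper defaults, so adjust them to your own project.

import oracle.jbo.ApplicationModule;
import oracle.jbo.client.Configuration;

public class DynamicVoClient {
    public static void main(String[] args) {
        // Create the root application module using the default local configuration (assumed names)
        ApplicationModule am =
                Configuration.createRootApplicationModule("model.AppModule", "AppModuleLocal");
        // AppModule is the client interface to which changeDynamicVoQuery was exported (assumed package)
        model.common.AppModule appModule = (model.common.AppModule) am;
        appModule.changeDynamicVoQuery("select * from Emp");
        Configuration.releaseRootApplicationModule(am, true);
    }
}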

Bug Fixing – to Estimate, or not to Estimate, that is the question

According to Steve McConnell in Code Complete (data from 1975-1992), most bugs don't take long to fix. About 85% of errors can be fixed in less than a few hours. Some more can be fixed in a few hours to a few days. But the rest take longer, sometimes much longer – as I talked about in an earlier post. Given all of these factors and uncertainty, how do you estimate a bug fix? Or should you bother?

Block out some time for bug fixing
Some teams don't estimate bug fixes upfront. Instead they allocate a block of time, some kind of buffer for bug fixing as a regular part of the team's work, especially if they are working in time boxes. Developers come back with an estimate only if it looks like the fix will require a substantial change – after they've dug into the code and found out that the fix isn't going to be easy, that it may require a redesign or require changes to complex or critical code that needs careful review and testing.

Use a rule of thumb placeholder for each bug fix
Another approach is to use a rough rule of thumb, a standard place holder for every bug fix. Estimate ½ day of development work for each bug, for example. According to this post on Stack Overflow the ½ day suggestion comes from Jeff Sutherland, one of the inventors of Scrum. This place holder should work for most bugs. If it takes a developer more than ½ day to come up with a fix, then they probably need help and people need to know anyways. Pick a place holder and use it for a while. If it seems too small or too big, change it. Iterate. You will always have bugs to fix. You might get better at fixing them over time, or they might get harder to find and fix once you've got past the obvious ones. Or you could use the data earlier from Capers Jones on how long it takes to fix a bug by the type of bug. A day or half day works well on average, especially since most bugs are coding bugs (on average 3 hours) or data bugs (6.5 hours). Even design bugs on average only take a little more than a day to resolve.

Collect some data – and use it
Steve McConnell, in Software Estimation: Demystifying the Black Art, says that it's always better to use data than to guess. He suggests collecting time data for as little as a few weeks or maybe a couple of months on how long on average it takes to fix a bug, and using this as a guide for estimating bug fixes going forward. If you have enough defect data, you can be smarter about how to use it. If you are tracking bugs in a bug database like Jira, and if programmers are tracking how much time they spend on fixing each bug for billing or time accounting purposes (which you can also do in Jira), then you can mine the bug database for similar bugs and see how long they took to fix – and maybe get some ideas on how to fix the bug that you are working on by reviewing what other people did on other bugs before you. You can group different bugs into buckets (by size – small, medium, large, x-large – or type) and then come up with an average estimate, and maybe even a best case, worst case and most likely for each type.

Use Benchmarks
For a maintenance team (a sustaining engineering or break/fix team responsible for software repairs only), you could use industry productivity benchmarks to project how many bugs your team can handle. Capers Jones in Estimating Software Costs says that the average programmer (in the US, in 2009) can fix 8-10 bugs per month (of course, if you're an above-average programmer working in Canada in 2012, you'll have to set these numbers much higher). Inexperienced programmers can be expected to fix 6 a month, while experienced developers using good tools can fix up to 20 per month.
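As a rough worked example of applying those benchmarks (the team size here is an assumption for illustration only): a five-person sustaining team of average programmers would be projected to clear roughly 5 × 8-10 = 40-50 bugs per month, closer to 30 per month (5 × 6) if the team is mostly inexperienced, and up toward 100 per month (5 × 20) if it is made up of experienced developers with good tools.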
If you're focusing on fixing security vulnerabilities reported by a pen tester or a scan, check out the remediation statistical data that Denim Group has started to collect, to get an idea of how long it might take to fix a SQL injection bug or an XSS vulnerability.

So, do you estimate bug fixes, or not?
Because you can't estimate how long it will take to fix a bug until you've figured out what's wrong, and most of the work in fixing a bug involves figuring out what's wrong, it doesn't make sense to try to do an in-depth estimate of how long it will take to fix each bug as they come up. Using simple historical data, a benchmark, or even a rough guess place holder as a rule-of-thumb all seem to work just as well. Whatever you do, do it in the simplest and most efficient way possible, don't waste time trying to get it perfect – and realize that you won't always be able to depend on it. Remember the 10x rule – some outlier bugs can take up to 10x as long to find and fix as an average bug. And some bugs can't be found or fixed at all – or at least not with the information that you have today. When you're wrong (and sometimes you're going to be wrong), you can be really wrong, and even careful estimating isn't going to help. So stick with a simple, efficient approach, and be prepared when you hit a hard problem, because it's gonna happen.

Reference: Bug Fixing – to Estimate, or not to Estimate, that is the question from our JCG partner Jim Bird at the Building Real Software blog....

Mahout and Scalding for poker collusion detection

While reading a very bright book on Mahout, Mahout in Action (which is a great hands-on intro to machine learning, as well), one of the examples caught my attention. The authors of the book were using the well-known K-means clustering algorithm for finding similar players on stackoverflow.com, where the criterion of similarity was the set of the authors of questions/answers the users were up-/downvoting.

In very simple words, the K-means algorithm iteratively finds clusters of points/vectors, located close to each other, in a multidimensional space. Applied to the problem of finding similar players on StackOverflow, we assume that every axis in the multi-dimensional space is a user, where the distance from zero is the sum of points awarded to the questions/answers given by other players (those dimensions are also often called "features", where the distance is a "feature weight").

Obviously, the same approach can be applied to one of the most sophisticated problems in massively-multiplayer online poker – collusion detection. We make the simple assumption that if two or more players have played too many games with each other (taking into account that any of the players could simply have been an active player who played a lot of games with anyone), they might be in collusion. We break a massive set of players into small, tight clusters (preferably with 2-8 players in each), using the K-means clustering algorithm. In the basic implementation that we will go through further, every user is represented as a vector, where the axes are the other players she has played with (and the weight of the feature is the number of games played together).

Stage 1. Building a dictionary
As the first step, we need to build a dictionary/enumeration of all the players involved in the subset of hand history that we analyze:

// extract user ID from hand history record val userId = (playerHistory: PlayerHandHistory) => new Text(playerHistory.getUserId.toString)// Builds a basic dictionary (enumeration, in fact) of all the players that participated in the selected subset of hand // history records class Builder(args: Args) extends Job(args) {// input tap is an HTable with hand history entries: hand history id -> hand history record, serialized with ProtoBuf val input = new HBaseSource("hand", args("hbasehost"), 'handId, Array("d"), Array('blob)) // output tap - plain text file with player IDs val output = TextLine(args("output"))input .read .flatMap('blob -> 'player) { // every hand history record contains the list of players that participated in the hand blob: Array[Byte] => // at the first stage, we simply extract the list of IDs, and add it to the flat list HandHistory.parseFrom(blob).getPlayerList.map(userId) } .unique('player) // remove duplicate user IDs .project('player) // leave only 'player column from the tuple .write(output)}

1003 1004 1005 1006 1007 ...

Stage 2. Adding indices to the dictionary
Secondly, we map user IDs to the position/index of a player in the vector.

class Indexer(args: Args) extends Job(args) {val output = WritableSequenceFile(args("output"), classOf[Text], classOf[IntWritable], 'userId -> 'idx)TextLine(args("input")).read .map(('offset -> 'line) -> ('userId -> 'idx)) { // dictionary lines are read with indices from TextLine source // out of the box.
For some reason, in my case, indices were multiplied by 5, so I had to divide them tuple: (Int, String) => (new Text(tuple._2.toString) -> new IntWritable((tuple._1 / 5))) } .project(('userId -> 'idx)) // only userId -> index tuple is passed to the output .write(output)}

1003 0 1004 1 1005 2 1006 3 1007 4 ...

Stage 3. Building vectors
We build the vectors that will be passed as input to the K-means clustering algorithm. As we noted above, every position in the vector corresponds to another player the player has played with:

/** * K-means clustering algorithm requires the input to be represented as vectors. * In our case, the vector, itself, represents the player, where other users the player has played with are * vector axes/features (the weight of the feature is the number of games played together) * User: remeniuk */ class VectorBuilder(args: Args) extends Job(args) {import Dictionary._// initializes dictionary pipe val dictionary = TextLine(args("dictionary")) .read .map(('offset -> 'line) -> ('userId -> 'dictionaryIdx)) { tuple: (Int, String) => (tuple._2 -> tuple._1 / 5) } .project(('userId -> 'dictionaryIdx))val input = new HBaseSource("hand", args("hbasehost"), 'handId, Array("d"), Array('blob)) val output = WritableSequenceFile(args("output"), classOf[Text], classOf[VectorWritable], 'player1Id -> 'vector)input .read .flatMap('blob -> ('player1Id -> 'player2Id)) { //builds a flat list of pairs of users that played together blob: Array[Byte] => val playerList = HandsHistoryCoreInternalDomain.HandHistory.parseFrom(blob).getPlayerList.map(userId) playerList.flatMap { playerId => playerList.filterNot(_ == playerId).map(otherPlayerId => (playerId -> otherPlayerId.toString)) } } .joinWithSmaller('player2Id -> 'userId, dictionary) // joins the list of pairs of //users that played together with // the dictionary, so that the second member of the tuple (ID of the second //player) is replaced with the index //in the dictionary .groupBy('player1Id -> 'dictionaryIdx) { group => group.size // groups pairs of players that played together, counting the number of hands } .map(('player1Id, 'dictionaryIdx, 'size) ->('playerId, 'partialVector)) { tuple: (String, Int, Int) => val partialVector = new NamedVector( new SequentialAccessSparseVector(args("dictionarySize").toInt), tuple._1) // turns a tuple of two users // into a vector with one feature partialVector.set(tuple._2, tuple._3) (new Text(tuple._1), new VectorWritable(partialVector)) } .groupBy('player1Id) { // combines partial vectors into one vector that represents the number of hands //played with other players group => group.reduce('partialVector -> 'vector) { (left: VectorWritable, right: VectorWritable) => new VectorWritable(left.get.plus(right.get)) } } .write(output)}

1003 {3:5.0,5:4.0,6:4.0,9:4.0} 1004 {2:4.0,4:4.0,8:4.0,37:4.0} 1005 {1:4.0,4:5.0,8:4.0,37:4.0} 1006 {0:5.0,5:4.0,6:4.0,9:4.0} 1007 {1:4.0,2:5.0,8:4.0,37:4.0} ...

The entire workflow required to vectorize the input:

val conf = new Configuration conf.set("io.serializations", "org.apache.hadoop.io.serializer.JavaSerialization," + "org.apache.hadoop.io.serializer.WritableSerialization")// the path, where the vectors will be stored to val vectorsPath = new Path("job/vectors") // enumeration of all users involved in a selected subset of hand history records val dictionaryPath = new Path("job/dictionary") // text file with the dictionary size val dictionarySizePath = new Path("job/dictionary-size") // indexed dictionary (every user ID in the dictionary is mapped to an index, from 0)
val indexedDictionaryPath = new Path("job/indexed-dictionary")println("Building dictionary...") // extracts IDs of all the users participating in the selected subset of hand history records Tool.main(Array(classOf[Dictionary.Builder].getName, "--hdfs", "--hbasehost", "localhost", "--output", dictionaryPath.toString)) // adds index to the dictionary Tool.main(Array(classOf[Dictionary.Indexer].getName, "--hdfs", "--input", dictionaryPath.toString, "--output", indexedDictionaryPath.toString)) // calculates dictionary size, and stores it to the FS Tool.main(Array(classOf[Dictionary.Size].getName, "--hdfs", "--input", dictionaryPath.toString, "--output", dictionarySizePath.toString))// reads dictionary size val fs = FileSystem.get(dictionaryPath.toUri, conf) val dictionarySize = new BufferedReader( new InputStreamReader( fs.open(new Path(dictionarySizePath, "part-00000")) )).readLine().toIntprintln("Vectorizing...") // builds vectors (player -> other players in the game) // IDs of other players (in the vectors) are replaced with indices, taken from dictionary Tool.main(Array(classOf[VectorBuilder].getName, "--hdfs", "--dictionary", dictionaryPath.toString, "--hbasehost", "localhost", "--output", vectorsPath.toString, "--dictionarySize", dictionarySize.toString))

Stage 4. Generating n random clusters
Randomly selected clusters/centroids are the entry point for the K-means algorithm:

// randomly selected clusters that will be passed as an input to K-means val inputClustersPath = new Path("job/input-clusters") val distanceMeasure = new EuclideanDistanceMeasure println("Making random seeds...") // build 30 initial random clusters/centroids RandomSeedGenerator.buildRandom(conf, vectorsPath, inputClustersPath, 30, distanceMeasure)

Stage 5. Running the K-means algorithm
With every iteration, K-means will find better centroids and clusters. As a result, we have 30 clusters of players that played with each other the most often:

// clusterization results val outputClustersPath = new Path("job/output-clusters") // textual dump of clusterization results val dumpPath = "job/dump"println("Running K-means...") // runs K-means algorithm with up to 20 iterations, to find clusters of colluding players (assumption of collusion is // made on the basis of the number of hands played together with other player[s]) KMeansDriver.run(conf, vectorsPath, inputClustersPath, outputClustersPath, new CosineDistanceMeasure(), 0.01, 20, true, 0, false)println("Printing results...")// dumps clusters to a text file val clusterizationResult = finalClusterPath(conf, outputClustersPath, 20) val clusteredPoints = new Path(outputClustersPath, "clusteredPoints") val clusterDumper = new ClusterDumper(clusterizationResult, clusteredPoints) clusterDumper.setNumTopFeatures(10) clusterDumper.setOutputFile(dumpPath) clusterDumper.setTermDictionary(new Path(indexedDictionaryPath, "part-00000").toString, "sequencefile") clusterDumper.printClusters(null)

Results
Let's go to "job/dump" now – this file contains textual dumps of all the clusters generated by K-means.
Here's a small fragment of the file:

VL-0{n=5 c=[1003:3.400, 1006:3.400, 1008:3.200, 1009:3.200, 1012:3.200] r=[1003:1.744, 1006:1.744, 1008:1.600, 1009:1.600, 1012:1.600]} Top Terms: 1006 => 3.4 1003 => 3.4 1012 => 3.2 1009 => 3.2 1008 => 3.2 VL-15{n=1 c=[1016:4.000, 1019:3.000, 1020:3.000, 1021:3.000, 1022:3.000, 1023:3.000, 1024:3.000, 1025:3.000] r=[]} Top Terms: 1016 => 4.0 1025 => 3.0 1024 => 3.0 1023 => 3.0 1022 => 3.0 1021 => 3.0 1020 => 3.0 1019 => 3.0

As we can see, two clusters of players have been detected: one with 8 players that have played a lot of games with each other, and a second with 4 players.

Reference: Poker collusion detection with Mahout and Scalding from our JCG partner Vasil Remeniuk at the Vasil Remeniuk blog....

Hibernate caches basics

Recently I have been experimenting with the Hibernate cache. In this post I would like to share my experience and point out some of the details of the Hibernate Second Level Cache. Along the way I will direct you to some articles that helped me implement the cache. Let's get started from the ground up.

Caching in Hibernate
Caching functionality is designed to reduce the amount of necessary database access. When objects are cached they reside in memory. You have the flexibility to limit the usage of memory and store the items in disk storage. The implementation will depend on the underlying cache manager. There are various flavors of caching available, but it is better to cache non-transactional and read-only data. Hibernate provides 3 types of caching.

1. Session Cache
The session cache caches objects within the current session. It is enabled by default in Hibernate. Read more about the Session Cache. Objects in the session cache reside in the same memory location.

2. Second Level Cache
The second level cache is responsible for caching objects across sessions. When this is turned on, objects will first be searched for in the cache, and if they are not found, a database query will be fired. Read here on how to implement the Second Level Cache. The second level cache will be used when objects are loaded using their primary key. This includes fetching of associations. In the case of the second level cache the objects are constructed and hence all of them will reside in different memory locations.

3. Query Cache
The Query Cache is used to cache the results of a query. Read here on how to implement the query cache. When the query cache is turned on, the results of the query are stored against the combination of query and parameters. Every time the query is fired the cache manager checks for the combination of parameters and query. If the results are found in the cache they are returned, otherwise a database transaction is initiated. As you can see, it is not a good idea to cache a query if it has a large number of parameters, or if a single parameter can take many values, because the results are stored in memory for each such combination. This can lead to extensive memory usage.

Finally, here is a list of good articles written on this topic:
1. Speed Up Your Hibernate Applications with Second-Level Caching
2. Hibernate: Truly Understanding the Second-Level and Query Caches
3. EhCache Integration with Spring and Hibernate. Step by Step Tutorial
4. Configuring Ehcache with hibernate

Reference: All about Hibernate Second Level Cache from our JCG partner Manu PK at the The Object Oriented Life blog....
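To give a rough idea of what switching on the second level and query caches described above looks like, here is a minimal sketch assuming Ehcache as the provider. The exact property names and the provider/region-factory class differ between Hibernate versions, and the entity and query shown are placeholders, so treat this as a starting point rather than a recipe.

<!-- hibernate.cfg.xml (Hibernate 3.x style; Hibernate 4.x uses hibernate.cache.region.factory_class instead) -->
<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.use_query_cache">true</property>
<property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.EhCacheProvider</property>

// entities must opt in, e.g. with org.hibernate.annotations.Cache / CacheConcurrencyStrategy:
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
public class Country { /* id and fields omitted */ }

// and each query that should use the query cache must be marked cacheable explicitly:
Query query = session.createQuery("from Country");
query.setCacheable(true);
List countries = query.list();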

Java threads: How many should I create

Introduction
"How many threads should I create?" Many years ago one of my friends asked me this question, and I gave him the answer following the guideline of "number of CPU cores + 1". Most of you will be nodding as you read this. Unfortunately, all of us are wrong on that point. Right now I would answer: if your architecture is based on a shared-resource model then your thread count should be "number of CPU cores + 1" for better throughput, but if your architecture is a shared-nothing model (like SEDA or ACTOR) then you can create as many threads as you need.

Walk Through
So here comes the question: why did so many elders continually give us the guideline of "number of CPU cores + 1"? Because they told us that the context switching of threads was heavy and would block your system's scalability. But nobody noticed the programming or architecture model they were working under. If you read carefully you will find that most of them described programming or architecture models based on a shared-resource model. Let me give you several examples: 1. Socket programming – the socket layer is shared by many requests, so you need a context switch between every request. 2. Information provider systems – most customers will continually access the same request, etc. So they meet the situation where multiple requests access the same resource, and the system has to add a lock to that resource because of its consistency requirements. Lock contention comes into play, so the context switching of multiple threads becomes very heavy.

After I found this interesting thing, I considered whether other programming or architecture models could work around that limitation. If the shared-resource model has failed for creating more Java threads, maybe we can try the shared-nothing model. Fortunately, I got a chance to create a system that needed large scalability; the system had to send out lots of notifications very quickly. So I decided to go ahead with the SEDA model for a trial and leverage my multiple-lane commonj pattern; currently I can run the Java application with a maximum of around 600 threads if your Java heap is set to 1.5 gigabytes on one machine.

The average memory consumption for one Java thread is around 512 kilobytes (Ref: http://www.javacodegeeks.com/2011/04/erlang-vs-java-memory-architecture.html), so 600 threads need almost 300M of memory consumption (including Java native and Java heap). And if your system design is good, the 300M usage will not actually be a burden. By the way, on Windows you can't create more than 1000 threads since Windows can't handle that many threads very well, but you can create 1000 threads on Linux if you leverage NPTL. So when people told you Java couldn't handle large amounts of concurrent job processing, that wasn't 100% true.

Someone may ask about the thread lifecycle swaps themselves: ready – runnable – running – waiting. I would say Java and the latest OSes can already handle them surprisingly efficiently, and if you have a multiple-core CPU and turn on NUMA the whole performance will be enhanced even further. So it's not your bottleneck, at least in the very beginning phase. Of course, creating a thread and bringing it to the running stage are very heavy operations, so please leverage a thread pool (JDK executors). And you could refer to http://community.jboss.org/people/andy.song/blog/2011/02/22/performance-compare-between-kilim-and-multilane-commj-pattern for the power of many Java threads.
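Since the advice above is to lean on the JDK executor framework rather than creating threads by hand, here is a minimal sketch of the two sizing strategies side by side; the pool sizes and task bodies are illustrative assumptions and not taken from the original system:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizingSketch {
    public static void main(String[] args) {
        // Shared-resource style work: keep the pool close to "number of CPU cores + 1"
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService cpuBoundPool = Executors.newFixedThreadPool(cores + 1);

        // Shared-nothing (SEDA/actor style) stage: a much larger pool is acceptable,
        // e.g. a few hundred threads for an I/O-heavy notification stage (assumed figure)
        ExecutorService notificationStage = Executors.newFixedThreadPool(300);

        cpuBoundPool.submit(new Runnable() {
            public void run() {
                System.out.println("CPU-bound task");
            }
        });
        notificationStage.submit(new Runnable() {
            public void run() {
                System.out.println("Notification task");
            }
        });

        cpuBoundPool.shutdown();
        notificationStage.shutdown();
    }
}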
Conclusion
In the future, how will you answer the question "How many Java threads should I create?" I hope your answer will change to:
1. If your architecture is based on a shared-resource model, then your thread count should be "number of CPU cores + 1" for better throughput.
2. If your architecture is a shared-nothing model (like SEDA or ACTOR), then you can create as many threads as you need.

Reference: How many java threads should I create? from our JCG partner Andy Song at the song andy's Stuff blog....

How Employers Measure Passion in Software Engineering Candidates

Over the past few months I have had some exchanges with small company executives and hiring managers which have opened my eyes to what I consider a relatively new wrinkle in the software development hiring world. I have been recruiting software engineers for 14 years, and I don't recall another time where I've observed this at the same level. Here are two examples.

The first incident was related to a candidate's ('A') resume that I submitted to a local start-up. A was well-qualified for the position based on the technical specifications the client gave me, and I anticipated that at worst a phone screen for A would be automatic. I even went as far as to share A's interview availability. A day after hitting 'send', I received feedback that the hiring manager was not interested in an interview. A large part of the manager's reasoning was related to the fact that A had taken a two year sabbatical to pursue a degree in a non-technical discipline and subsequently took a job in that field for a brief stint, before returning to the software world a few years ago. I clarified information about A to be sure that the manager had a full understanding of the situation, and the verdict was upheld – no interview.

My second anecdote involves another candidate ('B') that I presented for a position with a different client company. B was someone I would classify as a junior level candidate overall and probably 'borderline qualified' for the role. B had roughly the minimum amount of required experience with a few gaps, and I was not 100% confident that B would be invited in. B was brought in for an interview, performed about average on the technical portions, and shone interpersonally. As this company does not make a habit of hiring average engineers, I was at least a bit surprised when an offer was made. I was told that a contributing factor for making the offer was that B's 'extracurricular activities' were, according to my client, indicative of someone that was going to be a great engineer (though B's current skills were average). B's potential wasn't being assessed as if B were an entry level engineer with a solid academic background, but rather the potential was assessed based on B's interest in software.

There are obviously many other stories like these, and the link between them seems obvious. Software firms that are hiring engineers (smaller shops in particular) appear to be qualifying and quantifying a candidate's passion with the same level of scrutiny that they use in trying to measure technical skills and culture fit. Historically, companies have reviewed resumes and conducted interviews to answer the question, 'Can this candidate perform the task at hand?'. For my purposes as a recruiter of engineers, the question can be oversimplified as 'Can he/she code?'. It seems the trend is to follow that question with 'Does he/she CARE about the job, the company, and the craft?'.

If you lack passion for the industry, be advised that in future job interviews you may be judged on this quality. Whether you love coding or not, reading further will give you some insight. Engineer A is a cautionary tale, while B is someone the passionate will want to emulate. Let's start with A.

I don't want to be like A. How can I avoid appearing dispassionate on my resume?
Candidate A never had a chance, and I'll shoulder partial responsibility for that. A was rejected based solely on a resume and my accompanying notes, so theoretically A could be extremely passionate about software engineering without appearing to be so on paper.
Applicants do take some potential risks by choosing to include irrelevant experience, education, or even hobbies on a resume, and I will often warn my candidates of items that could cause alarm. In this case, A's inclusion of both job details and advanced degrees in another discipline was judged as a red flag that A might decide to again leave the software industry. A similar conclusion could have been reached if A had listed hobbies that evidenced a deep-rooted drive toward something other than engineering (say, studying for a certification in a trade).

Another related mistake on resumes is an Objective section that does not reflect the job for which you are applying. I have witnessed candidates being rejected for interviews based on an objective, and the most common example is when a candidate seeking a dev job lists 'technical lead' or 'manager' in the objective. Typical feedback might sound like this: 'Our job is a basic development position, and if she only wants to be in a leadership slot she would not be happy with the role'. Listing the type of job that you are passionate about is essential if you are going to include an objective. I prefer that candidates avoid an objective section to avoid this specific danger, as most job seekers are open to more than one possible hiring scenario.

I want to be like B. What can I do to highlight my passion during my search?
Since the search starts out with the resume, be sure to list all of the details about you that demonstrate your enthusiasm. This should include relevant education, professional experience, and hobbies or activities that pertain to engineering. When listing your professional experience, emphasize the elements of your job that were the most relevant to what you want to do. If you want to strictly do development, downplay the details of your sys admin or QA tasks (a mention could be helpful, just don't dwell). When listing your academic credentials, recent grads should be sure to provide specifics on classes relevant to your job goals, and it may be in your best interest to remove degrees or advanced courses unrelated to engineering.

In my experience, the most commonly overlooked resume details that would indicate passion are:
• participation in open source projects
• membership in user groups or meetups
• conference attendance
• public-speaking appearances
• engineering-related hobbies (e.g. Arduino, personal/organizational websites you built or maintain, tech blogging)
• technical volunteer/non-profit experience

If any of the above are not on your resume, be sure to include them before your next job search. Assuming that you get the opportunity to interview, try to gracefully and tactfully include some details from the bulleted list above. Your reading habits and technologies you self-study are best mentioned in interviews, as they may seem less appropriate as resume material.

Conclusion: Most candidates should feel free to at least mention interests that are not engineering related if the opportunity presents itself, as companies tend to like hiring employees that are not strictly one-dimensional. Just be sure not to overemphasize interests or activities that could be misinterpreted as future career goals. Passion alone won't get you a job, but it can certainly make a difference in a manager's decision on who to hire (candidate B) and who not to even interview (candidate A).
Make sure you use your resume and interview time to show your passion.

Reference: How Employers Measure Passion in Software Engineering Candidates (and how to express your passion in resumes and interviews) from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Spring JDBC Database connection pool setup

Setting up a JDBC database connection pool in the Spring framework is easy for any Java application: it is just a matter of changing a few configurations in the Spring configuration file. If you are writing a core Java application and not running on any web or application server like Tomcat or WebLogic, managing the database connection pool using Apache Commons DBCP and Commons Pool along with the Spring framework is a nice choice; but if you have the luxury of a web server and a managed J2EE container, consider using a connection pool managed by the J2EE server. Those are a better option in terms of maintenance and flexibility, and they also help to prevent java.lang.OutOfMemoryError: PermGen space in Tomcat by avoiding loading the JDBC driver in the web-app class-loader. Also, keeping the JDBC connection pool information in the server makes it easy to change or include settings for JDBC over SSL. In this article we will see how to set up a database connection pool in the Spring framework using Apache Commons DBCP and commons pool.jar.

This article is a continuation of my tutorials on the Spring framework and databases, like LDAP Authentication in J2EE with Spring Security and managing sessions using Spring Security. If you haven't read those articles, you may find them useful.

Spring Example JDBC Database Connection Pool
The Spring framework provides the convenient JdbcTemplate class for performing all database related operations. If you are not using Hibernate, then using Spring's JdbcTemplate is a good option. JdbcTemplate requires a DataSource, which is a javax.sql.DataSource implementation, and you can configure this directly as a Spring bean or obtain it via JNDI if you are using a J2EE web server or application server for managing the connection pool. See How to setup JDBC connection Pool in tomcat and Spring for JNDI based connection pooling for more details. In order to set up the data source you will require the following configuration in your applicationContext.xml (Spring configuration) file:

//Datasource connection settings in Spring <bean id="springDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" > <property name="url" value="jdbc:oracle:thin:@localhost:1521:SPRING_TEST" /> <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" /> <property name="username" value="root" /> <property name="password" value="root" /> <property name="removeAbandoned" value="true" /> <property name="initialSize" value="20" /> <property name="maxActive" value="30" /> </bean>

//Dao class configuration in spring <bean id="EmployeeDatabaseBean" class="com.test.EmployeeDAOImpl"> <property name="dataSource" ref="springDataSource"/> </bean>

The above configuration of the DBCP connection pool will create 20 database connections, as initialSize is 20, and will go up to 30 database connections if required, as maxActive is 30. You can customize your database connection pool by using the different properties provided by the Apache DBCP library. The above example creates a connection pool against an Oracle 11g database, and we are using oracle.jdbc.driver.OracleDriver, which comes along with ojdbc6.jar or ojdbc6_g.jar; to learn more about how to connect to an Oracle database from a Java program, see the link.
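For comparison, the JNDI-based alternative mentioned above usually boils down to a single element, assuming the pool is defined in the server and the jee namespace is declared in the configuration file; the JNDI name below is an assumed example and must match whatever your server defines:

<jee:jndi-lookup id="springDataSource" jndi-name="jdbc/SpringTestDataSource"/>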
Java Code for using Connection pool in Spring
Below is a complete code example of a DAO class which uses Spring JdbcTemplate to execute a SELECT query against the database, using a database connection from the connection pool. If you are not initializing the database connection pool on start-up, it may take a while when you execute your first query, because the pool needs to create a certain number of SQL connections before the query runs; but once the connection pool is created, subsequent queries will execute faster.

//Code for DAO Class using Spring JdbcTemplate package com.test; import javax.sql.DataSource; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.jdbc.core.JdbcTemplate;/** * Java Program example to use DBCP connection pool with Spring framework * @author Javin Paul */ public class EmployeeDAOImpl implements EmployeeDAO {private Logger logger = LoggerFactory.getLogger(EmployeeDAOImpl.class); private JdbcTemplate jdbcTemplate;public void setDataSource(DataSource dataSource) { this.jdbcTemplate = new JdbcTemplate(dataSource); }@Override public boolean isEmployeeExists(String emp_id) { try { logger.debug("Checking Employee in EMP table using Spring Jdbc Template"); int number = this.jdbcTemplate.queryForInt("select count(*) from EMP where emp_id=?", emp_id); if (number > 0) { return true; } } catch (Exception exception) { exception.printStackTrace(); } return false; } }

Dependencies:
1. You need to include the Oracle driver jar, like ojdbc6.jar, in your classpath.
2. The Apache DBCP and commons pool jars must be in the application classpath.

That's all on how to configure a JDBC database connection pool in the Spring framework. As I said, it's pretty easy using the Apache DBCP library: just a matter of a few configuration entries in the Spring applicationContext.xml and you are ready. If you want to configure a JDBC connection pool on Tomcat (a JNDI connection pool) and use it in Spring, then see here.

Reference: JDBC Database connection pool in Spring Framework – How to Setup Example from our JCG partner Javin Paul at the Javarevisited blog....

Spring Profiles in XML Config Files

My last blog was very simple as it covered my painless upgrade from Spring 3.0.x to Spring 3.1.x, and I finished by mentioning that you can upgrade your Spring schemas to 3.1 to allow you to take advantage of Spring's newest features. In today's blog, I'm going to cover one of the coolest of these features: Spring profiles. But, before talking about how you implement Spring profiles, I thought that it would be a good idea to explore the problem that they're solving, which is the need to create different Spring configurations for different environments. This usually arises because your app needs to connect to several similar external resources during its development lifecycle, and more often than not these 'external resources' are databases, although they could be JMS queues, web services, remote EJBs etc.

The number of environments that your app has to work on before it goes live usually depends upon a few things, including your organization's business processes, the scale of your app and its 'importance' (i.e. if you're writing the tax collection system for your country's revenue service then the testing process may be more rigorous than if you're writing an eCommerce app for a local shop). Just so that you get the picture, below is a quick (and probably incomplete) list of all the different environments that came to mind:

• Local Developer Machine
• Development Test Machine
• The Test Teams Functional Test Machine
• The Integration Test Machine
• Clone Environment (A copy of live)
• Live

This is not a new problem and it's usually solved by creating a set of Spring XML and properties files for each environment. The XML files usually consist of a master file that imports other environment specific files. These are then coupled together at compile time to create different WAR or EAR files. This method has worked for years, but it does have a few problems:

• It's non-standard. Each organization usually has its own way of tackling this problem, with no two methods being quite the same.
• It's difficult to implement, leaving lots of room for errors.
• A different WAR/EAR file has to be created for, and deployed on, each environment, taking time and effort which could be better spent writing code.

The differences in the Spring bean configurations can normally be divided into two. Firstly, there are environment specific properties such as URLs and database names. These are usually injected into Spring XML files using the PropertyPlaceholderConfigurer class and the associated ${} notation.

<bean id='propertyConfigurer' class='org.springframework.beans.factory.config.PropertyPlaceholderConfigurer'> <property name='locations'> <list> <value>db.properties</value> </list> </property> </bean>

Secondly, there are environment specific bean classes such as data sources, which usually differ depending upon how you're connecting to a database.
For example, in development you may have:

<bean id='dataSource' class='org.springframework.jdbc.datasource.DriverManagerDataSource'> <property name='driverClassName'> <value>${database.driver}</value> </property> <property name='url'> <value>${database.uri}</value> </property> <property name='username'> <value>${database.user}</value> </property> <property name='password'> <value>${database.password}</value> </property> </bean>

…whilst in test or live you'll simply write:

<jee:jndi-lookup id='dataSource' jndi-name='jdbc/LiveDataSource'/>

The Spring guidelines say that Spring profiles should only be used for the second example above, environment specific bean classes, and that you should continue to use PropertyPlaceholderConfigurer to initialize simple bean properties; however, you may want to use Spring profiles to inject an environment specific PropertyPlaceholderConfigurer into your Spring context. Having said that, I'm going to break this convention in my sample code as I want the simplest code possible to demonstrate Spring profiles' features.

Spring Profiles and XML Configuration
In terms of XML configuration, Spring 3.1 introduces the new profile attribute to the beans element of the spring-beans schema:

<beans profile='dev'>

It's this profile attribute that acts as a switch when enabling and disabling profiles in different environments. To explain all this further I'm going to use the simple idea that your application needs to load a person class, and that person class contains different properties depending upon the environment on which your program is running. The Person class is very trivial and looks something like this:

public class Person {private final String firstName;private final String lastName;private final int age;public Person(String firstName, String lastName, int age) {this.firstName = firstName;this.lastName = lastName;this.age = age;}public String getFirstName() {return firstName;}public String getLastName() {return lastName;}public int getAge() {return age;}}

…and is defined in the following XML configuration files:

<?xml version='1.0' encoding='UTF-8'?> <beans xmlns='http://www.springframework.org/schema/beans' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:schemaLocation='http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd' profile='test1'><bean id='employee' class='profiles.Person'> <constructor-arg value='John' /> <constructor-arg value='Smith' /> <constructor-arg value='89' /> </bean> </beans>

…and

<?xml version='1.0' encoding='UTF-8'?> <beans xmlns='http://www.springframework.org/schema/beans' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:schemaLocation='http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd' profile='test2'><bean id='employee' class='profiles.Person'> <constructor-arg value='Fred' /> <constructor-arg value='ButterWorth' /> <constructor-arg value='23' /> </bean> </beans>

…called test-1-profile.xml and test-2-profile.xml respectively (remember these names, they're important later on). As you can see, the only differences in configuration are the first name, last name and age properties. Unfortunately, it's not enough simply to define your profiles; you have to tell Spring which profile you're loading.
This means that the following old 'standard' code will now fail:

@Test(expected = NoSuchBeanDefinitionException.class)public void testProfileNotActive() {// Ensure that properties from other tests aren't setSystem.setProperty("spring.profiles.active", "");ApplicationContext ctx = new ClassPathXmlApplicationContext("test-1-profile.xml");Person person = ctx.getBean(Person.class);String firstName = person.getFirstName();System.out.println(firstName);}

Fortunately there are several ways of selecting your profile, and to my mind the most useful is by using the 'spring.profiles.active' system property. For example, the following test will now pass:

System.setProperty("spring.profiles.active", "test1");ApplicationContext ctx = new ClassPathXmlApplicationContext("test-1-profile.xml");Person person = ctx.getBean(Person.class);String firstName = person.getFirstName();assertEquals("John", firstName);

Obviously, you wouldn't want to hard code things as I've done above, and best practice usually means keeping the system property definitions separate from your application. This gives you the option of using either a simple command line argument such as:

-Dspring.profiles.active='test1'

…or adding

# Setting a property value spring.profiles.active=test1

to Tomcat's catalina.properties.

So, that's all there is to it: you create your Spring XML profiles using the beans element's profile attribute and switch on the profile you want to use by setting the spring.profiles.active system property to your profile's name.

Accessing Some Extra Flexibility
However, that's not the end of the story, as the Guys at Spring have added a number of ways of programmatically loading and enabling profiles – should you choose to do so.

@Testpublic void testProfileActive() {ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext("test-1-profile.xml");ConfigurableEnvironment env = ctx.getEnvironment();env.setActiveProfiles("test1");ctx.refresh();Person person = ctx.getBean("employee", Person.class);String firstName = person.getFirstName();assertEquals("John", firstName);}

In the code above, I've used the new ConfigurableEnvironment class to activate the "test1" profile.

@Testpublic void testProfileActiveUsingGenericXmlApplicationContextMultipleFilesSelectTest1() {GenericXmlApplicationContext ctx = new GenericXmlApplicationContext();ConfigurableEnvironment env = ctx.getEnvironment();env.setActiveProfiles("test1");ctx.load("*-profile.xml");ctx.refresh();Person person = ctx.getBean("employee", Person.class);String firstName = person.getFirstName();assertEquals("John", firstName);}

However, The Guys At Spring now recommend that you use the GenericApplicationContext class instead of ClassPathXmlApplicationContext and FileSystemXmlApplicationContext, as this provides additional flexibility. For example, in the code above, I've used GenericXmlApplicationContext's load(...) method to load a number of configuration files using a wild card:

ctx.load("*-profile.xml");

Remember the filenames from earlier on? This will load both test-1-profile.xml and test-2-profile.xml. Profiles also include additional flexibility that allows you to activate more than one at a time.
If you take a look at the code below, you can see that I'm activating both of my test1 and test2 profiles:

@Testpublic void testMultipleProfilesActive() {GenericXmlApplicationContext ctx = new GenericXmlApplicationContext();ConfigurableEnvironment env = ctx.getEnvironment();env.setActiveProfiles("test1", "test2");ctx.load("*-profile.xml");ctx.refresh();Person person = ctx.getBean("employee", Person.class);String firstName = person.getFirstName();assertEquals("Fred", firstName);}

Beware: in the case of this example I have two beans with the same id of "employee", and there's no way of telling which one is valid and is supposed to take precedence. From my test, I'm guessing that the second one that's read overwrites, or masks access to, the first. This is okay as you're not supposed to have multiple beans with the same name – it's just something to watch out for when activating multiple profiles.

Finally, one of the better simplifications you can make is to use nested <beans/> elements.

<?xml version='1.0' encoding='UTF-8'?> <beans xmlns='http://www.springframework.org/schema/beans' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:context='http://www.springframework.org/schema/context' xsi:schemaLocation='http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd'><beans profile='test1'> <bean id='employee1' class='profiles.Person'> <constructor-arg value='John' /> <constructor-arg value='Smith' /> <constructor-arg value='89' /> </bean> </beans><beans profile='test2'> <bean id='employee2' class='profiles.Person'> <constructor-arg value='Bert' /> <constructor-arg value='William' /> <constructor-arg value='32' /> </bean> </beans></beans>

This takes away all the tedious mucking about with wild cards and loading multiple files, albeit at the expense of a minimal amount of flexibility. My next blog concludes my look at Spring profiles by taking a look at the @Configuration annotation used in conjunction with the new @Profile annotation… so, more on that later.

Reference: Using Spring Profiles in XML Config from our JCG partner Roger Hughes at the Captain Debug's Blog blog....
