What's New Here?


MongoDB performance testing

So, this morning I was hacking around in the mongo shell. I had come up with three different ways to aggregate the data I wanted, but wasn't sure which one I should subsequently port to code to use within my application. So how would I decide which method to implement? Well, let's just choose the one that performs best. OK, how do I do that? Hmmm. I could download and install some of the tools out there, or I could just wrap the shell code in a function and add some timings. Or I could use the same tool that I use to performance test everything else: JMeter. To me it was a no-brainer.

So how do we do it? There is a full tutorial here. Simply put, you need to do the following:

1. Create a Sampler class.
2. Create a BeanInfo class.
3. Create a properties file.
4. Bundle everything into a jar and drop it into the apache-jmeter-X.X\lib\ext folder.
5. Update search_paths=../lib/ext/mongodb.jar in jmeter.properties if you place the jar anywhere else.

How I did it

I tend to have a scratch pad project set up in my IDE, so I decided just to go with that. Just to be on the safe side, I imported all the dependencies from:

- apache-jmeter-X.X\lib
- apache-jmeter-X.X\lib\ext
- apache-jmeter-X.X\lib\junit

I then created the two classes and the properties file, exported the jar to apache-jmeter-X.X\lib\ext, and fired up JMeter. Go through the normal steps to set the test plan up:

1. Right click Test Plan and add a Thread Group.
2. Right click the Thread Group and add a Sampler, in this case a MongoDB Script Sampler.
3. Add your script to the textarea: db.YOUR_COLLECTION_NAME.insert({"jan" : "thinks he is great"})
4. Run the test.

Happy days. You can then use JMeter as you would for any other sampler.

Future enhancements

This is just a hack that took me 37 minutes to get running, plus 24 minutes if you include this post. It can certainly be extended, for instance to let you enter the replica set config details, and to pull the creation of the connection out so we're not initiating it each time we run a test.

Reference: Performance testing MongoDB from our JCG partner Jan Ettles at the Exceptionally exceptional exceptions blog.
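For a feel of what the Sampler class involves, here is a minimal, hedged sketch that uses JMeter's generic Java sampler API together with the legacy synchronous MongoDB Java driver. It is not the code from the tutorial above (which uses a dedicated Sampler/BeanInfo pair), and the default database name and script are made-up placeholders.

import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

import com.mongodb.DB;
import com.mongodb.Mongo;

// Sketch of a MongoDB script sampler: runs a JavaScript snippet server-side via db.eval()
public class MongoScriptSampler extends AbstractJavaSamplerClient {

    @Override
    public Arguments getDefaultParameters() {
        Arguments args = new Arguments();
        args.addArgument("host", "localhost");
        args.addArgument("database", "test");            // assumed database name
        args.addArgument("script", "db.demo.count();");  // assumed default script
        return args;
    }

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult result = new SampleResult();
        result.sampleStart();                            // start timing the sample
        try {
            Mongo mongo = new Mongo(context.getParameter("host"));
            DB db = mongo.getDB(context.getParameter("database"));
            Object out = db.eval(context.getParameter("script"));
            result.setResponseData(String.valueOf(out), "UTF-8");
            result.setSuccessful(true);
            mongo.close();
        } catch (Exception e) {
            result.setSuccessful(false);
            result.setResponseMessage(e.toString());
        } finally {
            result.sampleEnd();                          // stop timing
        }
        return result;
    }
}

Packaged into a jar under lib/ext, a sampler written this way shows up under JMeter's generic "Java Request" sampler rather than as a dedicated "MongoDB Script Sampler", which is one reason the Sampler/BeanInfo route from the tutorial is nicer.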

Android Emulator with Nexus One skin

I know it doesn't really add value to your development work. It's just for fun, but for those of you who like it, here are some links to Nexus One skins that can be used on your local dev emulator:

- http://www.tughi.com/2010/01/19/nexus-one-skin/
- http://timhoeck.com/2010/01/16/nexus-one-emulator-skin/

Update: the Nexus S skin is available here. Installation is easy. Just extract it to <android-sdk-install-root>/platforms/<your-platform>/skins and then use it. The result is pretty nice :)

Update: one thing I forgot to mention. If you have problems with the size of the emulator window when it starts, you may want to add a -scale option to your emulator run arguments. In Eclipse you add it under your Run Configurations.

Reference: Tune your emulator with a pretty Nexus One skin! from our JCG partner Juri Strumpflohner at the Juri Strumpflohner's TechBlog blog.

Statement driven development

Today most methodologies use a model-oriented approach. Whether it is domain-driven or reverse-engineering, the common point is that they start from a static definition of a model. For domain-driven design, it is the domain expressed in the development language itself. For reverse-engineering, it starts from a static model structure definition, e.g. an xsd, wsdl or database schema.

Statement driven development focuses on the actions the developer can run against some models, not on the models themselves. Example: a select statement fetching all the group members that a user with the group-administrator role can manage. Here the statement can be materialized as a search query with SQL as the implementation. The important thing for the developer is the query and its I/O, not really the underlying model and not the decorative technology.

Why Statement-Driven Development?

A couple of articles and limitations make me think that a statement-driven approach can be a flexible and fast alternative for developers:

- Usually, after defining an abstract model such as an ORM, the user still has to write his use-case statements. This means there is still a second step to perform.
- The model is sometimes/often overkill. As a developer, you should not have to apprehend the entire model complexity before yielding productivity.
- Limitations of standards: as in the JOOQ article about functions and stored procedures, extracting the meta-information can reach its limits when you enter vendor-specific particularities.
- Native DSL: there is a trend that states that SQL is the only DSL you need. Why should you wrap it in another technology abstraction that limits its power?

Bye-bye domain objects... welcome DTOs. I/O are by essence DTOs (why should they be the exact reflection of a persistence layer?). That situation is just a specific exception, although one widely used by multiple apps/frameworks (strong reusability, but limitations). Remark: this article is not here to cover the everlasting DO vs. DTO debate. SDD just brings a new approach that does not exclude but complements the DDD/reverse-engineering one.

Concretely, MinuteProject from release 0.8.1+ (mid-May 2012) will offer Statement-Driven Development facilities. The user focuses on one SQL statement. The output is deduced by executing it. The input is easily configurable.

Example

Here is a simple configuration. statement-model is the new node:

<model>
 ...
 <statement-model>
  <queries>
   <query name="get address street">
    <query-body><value><![CDATA[select * from address where addressid > ?]]></value></query-body>
    <query-params>
     <query-param name="identifier_Address" is-mandatory="false" type="INT" sample="1"></query-param>
    </query-params>
   </query>
  </queries>
 </statement-model>
</model>

This configuration should be enough to get:

- an input bean
- an output bean
- an output list bean
- all the technology decoration, e.g. for REST CXF: a resource bean with REST path, Spring config, and a native DAO

A demonstration will be shipped in the next MinuteProject release (0.8.1). Integrating with REST is pretty much statement-driven: basically you just need to know the URL plus its I/O.

Conclusion

With SDD you focus on the statement and its I/O. MinuteProject simplifies the rest (the technology wrapper).

Reference: Statement driven development from our JCG partner Florian Adler at the minuteproject blog.
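To make the statement-plus-I/O idea concrete, here is a rough sketch, in plain JDBC, of the kind of input/output artifacts the configuration above implies. The DTO, DAO and column names are assumptions for illustration, not MinuteProject's actual generated code.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical output DTO shaped by the "get address street" SELECT
class AddressOut {
    long addressId;
    String street;
}

class GetAddressStreetDao {

    // Input: the single query parameter; output: a list of DTOs matching the statement's result set
    List<AddressOut> getAddressStreet(Connection connection, int identifierAddress) throws SQLException {
        String sql = "select * from address where addressid > ?";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setInt(1, identifierAddress);
            try (ResultSet rs = ps.executeQuery()) {
                List<AddressOut> result = new ArrayList<>();
                while (rs.next()) {
                    AddressOut row = new AddressOut();
                    row.addressId = rs.getLong("addressid"); // assumed column names
                    row.street = rs.getString("street");
                    result.add(row);
                }
                return result;
            }
        }
    }
}

The point of SDD is that this boilerplate (plus the REST/Spring decoration around it) is what the generator derives from the single query declaration, so the developer only maintains the statement itself.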

Building security into a development team

Getting application developers to understand and take responsibility for software security is difficult. Bootstrapping an Appsec program requires that you get the team up to speed quickly on security risks and what problems they need to look for, how to find and fix and prevent these problems, and what tools to use, and that you convince them that they need to take security seriously. One way to do this is to train everyone on the development team on software security.

But at RSA 2011, Caleb Sima's presentation Don't Teach Developers Security challenged the idea that training application developers on software security will make a meaningful difference. He points out (rightly) that you can't teach most developers anything useful about secure software development in a few hours (which is as much Appsec training as most developers will get anyway). At best, training like this is a long-term investment that will only pay off with reinforcement and experience – the first step on a long road.

Most developers (he suggests as many as 90 out of 100) won't take a strong interest in software security regardless. They are there to build stuff; that's what they get paid for, that's what they care about and that's what they do best. Customers love them and managers (like me) love them too because they deliver, and that's what we want them spending their time doing. We don't want or need them to become AppSec experts. Only a few senior, experienced developers will "get" software security and understand or care about all of the details, and in most cases this is enough. The rest of the team can focus on writing good defensive code and using the right frameworks and libraries properly.

Caleb Sima recommends starting an Appsec program by working with QA. Get an application security assessment: a pen test or a scan to identify security vulnerabilities in the app. Identify the top 2 security issues found. Then train the test team on these issues: what they look like, how to test for them, what tools to use. It's not practical to expect a software tester to become a pen testing expert, but they can definitely learn how to effectively test for specific security issues. When they find security problems they enter them as bugs like any other bug, and then it's up to development to fix the bugs.

Get some wins this way first. Then extend security into the development team. Assign one person as a security controller for each application: a senior developer who understands the code and who has the technical skills and experience to take on security problems. Give them extra Appsec training and the chance to play a leadership role. It's their job to assess technical risks for security issues. They decide on what tools the team will use to test for security problems, recommend libraries and frameworks for the team to use, and help the rest of the team to write secure code.

What worked for us

Looking back on what worked for our Appsec program, we learned similar lessons and took some of the same steps. While we were still in startup, I asked one of our senior developers to run an internal security assessment and make sure that our app was built in a secure way. I gave him extra time to learn about secure development and Appsec, and gave him a chance to take on a leadership role for the team.
When we brought expert consultants in to do additional assessments (a secure design review and code review and pen testing) he took the lead on working with them and made sure that he understood what they were doing and what they found and what we needed to do about it. He selected a static analysis tool and got people to use it. He ensured that our framework code was secure and used properly, and he reviewed the rest of the team's code for security and reliability problems. Security wasn't his entire job, but it was an important part of what he did. When he eventually left the team, another senior developer took on this role.

Most development teams have at least 1 developer who the rest of the team respects and looks to for help on how to use the language and platform correctly. Someone who cares about how to write good code and who is willing to help others with tough coding problems and troubleshooting. Who handles the heavy lifting on frameworks or performance engineering work. This is the developer that you need to take on your core security work. Someone who likes to learn about technical stuff and who picks new things up quickly, who understands and likes hard technical stuff (like crypto and session management), who makes sure that things get done right.

Without knowing it we ended up following a model similar to Adobe's "security ninja" program, although on a micro-scale. Most developers on the team are white belts or yellow belts with some training in secure software development and defensive programming. Our security lead is the black belt, with deeper technical experience and extra training and responsibility for leading software security for the application. Although we depended on external consultants for the initial assessments and to help us lay out a secure development roadmap, we have been able to take responsibility for secure development into the development team. Security is a part of what they do and how they design and build software today.

This model works and it scales. If as a manager you look at security as an important and fundamental technical problem that needs to be solved (rather than a pain-in-the-ass that needs to be gotten over), then you will find that your senior technical people will take it seriously. And if your best technical people take security seriously, then the rest of the team will too.

Reference: Building security into a development team from our JCG partner Jim Bird at the Building Real Software blog.

MapReduce for dummies

Continuing the coverage of Hadoop components, we will go through the MapReduce component. MapReduce is a concept that has its roots in the map and reduce functions of LISP. But before we jump into MapReduce, let's start with an example to understand how it works.

Given a couple of sentences, write a program that counts the number of words. The traditional approach to this problem is to read a word, check whether it is one of the stop words, and if not, add the word to a HashMap with the word as key and the number of occurrences as value. If the word is not found in the HashMap, add it and set the value to 1. If the word is found, increment the value and store it back in the HashMap. In this scenario, the program processes the sentence serially.

Now, imagine that instead of a sentence we need to count the number of words in an encyclopedia. Serial processing of this amount of data is time consuming. So the question is: is there another algorithm we can use to speed up the processing?

Let's take the same problem and divide it into 2 steps. In the first step, we take each sentence and map the number of words in that sentence. Once the words have been mapped, we move to the next step, in which we combine (reduce) the maps from the two sentences into a single map. That's it: we have just seen how individual sentences can be mapped individually and then, once mapped, reduced to a single resulting map.

The advantages of the MapReduce approach are:

1. The whole process gets distributed into small tasks, which helps the job complete faster.
2. Both steps can be broken down into tasks. First, run multiple map tasks; once the mapping is done, run multiple reduce tasks to combine the results and finally aggregate them.

Now, imagine this MapReduce paradigm working on HDFS. HDFS has data nodes that split and store files in blocks. If we run the map tasks on each of the data nodes, we can easily leverage the compute power of those data node machines. So each of the data nodes can run tasks (map or reduce), which are the essence of MapReduce. As each data node stores data for multiple files, multiple tasks might be running at the same time for different data blocks.

To control the MapReduce tasks, there are 2 processes that need to be understood:

1. JobTracker – the JobTracker is the service within Hadoop that farms out MapReduce tasks to specific nodes in the cluster, ideally the nodes that have the data, or at least are in the same rack.
2. TaskTracker – the TaskTracker is a process that starts and tracks MapReduce tasks in a cluster. It contacts the JobTracker for task assignments and reports results.

These trackers are part of Hadoop itself and can be tracked easily via:

- http://<host-name>:50030/ – web UI for the MapReduce job tracker(s)
- http://<host-name>:50060/ – web UI for the task tracker(s)

Reference: MapReduce for dummies from our JCG partner Munish K Gupta at the Tech Spot blog.
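As a concrete illustration of the two steps described above, here is a minimal word-count sketch against the Hadoop mapreduce API (mapper and reducer only; the driver class and stop-word handling are omitted). Treat it as an illustrative example under those assumptions rather than production code.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // Map step: emit (word, 1) for every word in the input line
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce step: sum the counts emitted for each word into a single result per word
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : values) {
                sum += count.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}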

ADF: Backing bean scope in task flow finalizer

Introduction

It is a very common and recommended practice to use task flow finalizers when we need to do some final work (clean up resources, close connections, etc.) before the task flow is gone. As usual, we work with managed beans declared inside the task flow. The managed beans can have different scopes – request, page flow, view, backing bean, etc. The scope depends on what the bean is actually used for. There is a small problem when we access a backingBean-scoped managed bean in the finalizer. Let's have a look at the example below.

We have a bounded task flow with page fragments, and inside the task flow we have managed beans with three different scopes – page flow, view and backingBean:

<managed-bean id="__3">
  <managed-bean-name id="__5">FlowBean</managed-bean-name>
  <managed-bean-class id="__4">view.BackBean</managed-bean-class>
  <managed-bean-scope id="__2">pageFlow</managed-bean-scope>
</managed-bean>
<managed-bean id="__9">
  <managed-bean-name id="__6">ViewBean</managed-bean-name>
  <managed-bean-class id="__7">view.BackBean</managed-bean-class>
  <managed-bean-scope id="__8">view</managed-bean-scope>
</managed-bean>
<managed-bean id="__10">
  <managed-bean-name id="__11">BackBean</managed-bean-name>
  <managed-bean-class id="__12">view.BackBean</managed-bean-class>
  <managed-bean-scope id="__13">backingBean</managed-bean-scope>
</managed-bean>

On the page we have three buttons bound to the managed beans of each scope:

<af:commandButton text="commandButton 1" id="cb1" action="go"
                  binding="#{backingBeanScope.BackBean.button}">
</af:commandButton>
<af:commandButton text="commandButton 1" id="cb2" binding="#{viewScope.ViewBean.button}"/>
<af:commandButton text="commandButton 1" id="cb3" binding="#{pageFlowScope.FlowBean.button}"/>

The bean class has a button attribute and a testString attribute that signals whether the button is assigned:

private RichCommandButton button;

public void setButton(RichCommandButton button) {
    this.button = button;
}

public RichCommandButton getButton() {
    return button;
}

public String getTestString() {
    if (this.button == null)
        return "The button is not assigned";
    else
        return "The button is assigned";
}

When we press cb1 we go to the return activity and the finalizer gets executed:

public static String resolveExpression(String expression) {
    FacesContext fc = FacesContext.getCurrentInstance();
    return (String) fc.getApplication().evaluateExpressionGet(fc, expression, String.class);
}

public void theFinalizer() {
    // Just to have test access to the managed beans
    // and to be sure we work with the same instances
    System.out.println(resolveExpression("#{pageFlowScope.FlowBean.testString}") + " " +
        resolveExpression("#{pageFlowScope.FlowBean.button}"));
    System.out.println(resolveExpression("#{viewScope.ViewBean.testString}") + " " +
        resolveExpression("#{viewScope.ViewBean.button}"));
    System.out.println(resolveExpression("#{backingBeanScope.BackBean.testString}") + " " +
        resolveExpression("#{backingBeanScope.BackBean.button}"));
}

Run the application, press the cb1 button and see the following in the system log:

The button is assigned RichCommandButton[UIXFacesBeanImpl, id=cb3]
The button is assigned RichCommandButton[UIXFacesBeanImpl, id=cb2]
The button is assigned RichCommandButton[UIXFacesBeanImpl, id=cb1]

Everything seems to be OK. The task flow is finished and in the finalizer we work with the correct managed bean instances. In this test the task flow was finished correctly using a Return activity. Now let's abandon our task flow – just navigate away from the page the task flow is placed on.
The finalizer is executed as well; have a look at the system output:

The button is assigned RichCommandButton[UIXFacesBeanImpl, id=cb3]
The button is assigned RichCommandButton[UIXFacesBeanImpl, id=cb2]
The button is not assigned

This means that we are working with a different instance of backingBeanScope.BackBean! In case of an abandoned task flow the controller doesn't see the correct backingBeanScope in the finalizer; the scope is empty, and the controller creates a new instance of BackBean. At the same time, pageFlowScope and viewScope work perfectly. So, be careful when you use backingBean-scoped managed beans within task flows, especially when you access them in finalizers. In any case you can use the same trick described in the previous post.

That's it!

Reference: Backing bean scope in ADF task flow finalizer from our JCG partner Eugene Fedorenko at the ADF Practice blog.

Troubleshooting Play Framework 2 apps on Openshift

Troubleshooting Openshift

With the do-it-yourself application type you really get a lot of freedom to support almost any framework or server that can be built and run on a Linux box. But you do have to do your homework and do some research. So in this article I'll show you a couple of tips I learnt playing around with Openshift and Play Framework. Comments are more than welcome, so I hope you can also provide me some more tips to help us all get our apps running on the cloud.

Providing native support for Play Framework applications

Right now, the solution we found for deploying Play 2.0 apps on Openshift is quite handy, but we could make it a little better. The problem is that we have to compile the app locally (issuing play stage) and then push 30 MB of libraries to Openshift. The ideal thing, and that's what we did with the Play 1.x quickstart and with the latest version of the Openshift module for Play Framework 1.x, would be to just upload our sources and then let Openshift download and install Play, compile our app, and start it. Unfortunately we've run into some memory constraints (it seems compiling Play 2 apps is a bit memory demanding) that eventually raised some issues. We are trying to work them out, but perhaps, with these tips, you could help us troubleshoot them. With the open-sourcing of Openshift and the new Origin livecd we have more tools available to further investigate what's going on; I just didn't have time yet to start playing with it. So, enough chatter, let's get our hands dirty.

Houston, we have a problem

All right, you've just read this guide or followed our steps on the Play Framework webinar using this Play 2.0 quickstart (in fact, some of these tips will help troubleshoot any app running on Openshift) and something went wrong. First of all, have a look at the logs. Just issue:

rhc app tail -a myapp -l mylogin@openshift.com -p mysecretpass

Leave that window open, it will become quite handy later. Then we'll ssh into our remote machine. Just issue:

rhc app show -a myapp -l mylogin@openshift.com -p mysecretpass

and you'll get something like:

Application Info
================
contacts
Framework: diy-0.1
Creation: 2012-04-19T14:20:16-04:00
UUID: 0b542570e41b42e5ac2a255c316871bc
Git URL: ssh://0b542570e41b42e5ac2a255c316871bc@myapp-mylogin.rhcloud.com/~/git/myapp.git/
Public URL: http://myapp-mylogin.rhcloud.com/
Embedded: None

Take the part after the ssh:// of the Git URL, and log into your Openshift machine:

ssh 96e487d1d4a042f8833efc696604f1e7@myapp-mylogin.rhcloud.com

(If you are lazy like me, go on and vote for an easier way to ssh into Openshift.) It's also a good idea to open another command window, ssh into Openshift, and run something like "top" or "watch -n 2 free -m" to keep an eye on memory usage.

Troubleshooting Play

You know the old motto, "write once, run everywhere"... well, it just "should" work, but just in case you could try compiling your app with the same JDK version as the one running on Openshift. Just run:

java -version
java version "1.6.0_22"
OpenJDK Runtime Environment (IcedTea6 1.10.6) (rhel-1.43.1.10.6.el6_2-i386)
OpenJDK Server VM (build 20.0-b11, mixed mode)

and install the same JDK version on your box. Then compile your app and redeploy (you can use the convenience script openshift_deploy). If that doesn't work, try to do the whole process manually on Openshift.
You should do something like this:

# download play
cd ${OPENSHIFT_DATA_DIR}
curl -o play-2.0.1.zip http://download.playframework.org/releases/play-2.0.1.zip
unzip play-2.0.1.zip
cd ${OPENSHIFT_REPO_DIR}

# stop app
.openshift/action_hooks/stop

# clean everything - watch for errors, if it fails retry a couple more times
${OPENSHIFT_DATA_DIR}play-2.0.1/play clean

If you get something like:

/var/lib/stickshift/0b542570e41b42e5ac2a255c316871bc/myapp/data/play-2.0.1/framework/build: line 11: 27439 Killed

it means it failed miserably (that's the memory problem I told you about). And it's such a bad-tempered error that you'll also lose your command prompt. Just blindly type "reset" and hit enter, and you'll get your prompt back. Then just try again... You might also get this message:

This project uses Play 2.0!
Update the Play sbt-plugin version to 2.0.1 (usually in project/plugins.sbt)

That means you created the app with Play 2.0 and you are now trying to compile it with a different version. Just update the project/plugins.sbt file or download the appropriate version. Now compile and stage your app:

# compile everything - watch for errors, if it fails retry a couple more times
${OPENSHIFT_DATA_DIR}play-2.0.1/play compile

# stage - watch for errors, if it fails retry a couple more times
${OPENSHIFT_DATA_DIR}play-2.0.1/play stage

Then run it (don't be shy and have a look at the action hook scripts in the quickstart repo):

target/start -Dhttp.port=8080 -Dhttp.address=${OPENSHIFT_INTERNAL_IP} -Dconfig.resource=openshift.conf

Go check it at https://myapp-mylogin.rhcloud.com. If everything works OK, just stop it with ctrl-c, and then run:

.openshift/action_hooks/start

You should see your app starting in the console with the log files. Now you can log out from the ssh session with ctrl-d, and issue:

rhc app restart -a myapp -l mylogin@openshift.com -p mysecretpass

and you should see something like:

Stopping play application
Trying to kill proccess, attempt number 1
kill -SIGTERM 19128
/var/lib/stickshift/0b542570e41b42e5ac2a255c316871bc/openbafici/repo/target/start "-DapplyEvolutions.default=true" -Dhttp.port=8080 -Dhttp.address=127.11.189.129 -Dconfig.resource=openshift.conf
Play server process ID is 21226
[info] play - Application started (Prod)
[info] play - Listening for HTTP on port 8080...

I hope these tips will become useful. As I said, I'm looking forward to start playing with the Openshift Origin livecd, and then I'll tell you about it. In the meantime I'll leave you in the company of the good old Openshift Rocket Bear; I know you miss him too, so why not get him back?

Reference: Troubleshooting Play Framework 2 apps on Openshift from our JCG partner Sebastian Scarano at the Having fun with Play framework! blog.

Maven Build Dependencies

Maven and Gradle users who are familiar with release and snapshot dependencies may not know about TeamCity snapshot dependencies, or may assume they're somehow related to Maven (which isn't true). TeamCity users who are familiar with artifact and snapshot dependencies may not know that adding an Artifactory plugin allows them to use artifact and build dependencies as well, on top of those provided by TeamCity. Some of the names mentioned above seem not to be established enough, while others may require a discussion about their usage patterns. Having this in mind, I've decided to explore each solution in its own blog post, setting a goal of providing enough information so that people can choose what works best. This first post explores Maven snapshot and release dependencies. The second post covers artifact and snapshot dependencies provided by TeamCity, and the third and final part will cover artifact and build dependencies provided by the TeamCity Artifactory plugin.

Internal and External Dependencies

Build processes may run in total isolation by checking out the entire code base and building an application from scratch. This is the case for projects where relevant binary dependencies (if there are any) are kept in VCS together with the project sources. However, in many other cases build scripts rely on internal or external dependencies of some sort.

Internal dependencies are satisfied by our own code, where we have full control over the project, which can be split into multiple modules or sub-projects. External dependencies are satisfied by someone else's code (over which we have no control) and we consume it or use it as clients. This can be a third-party library such as Spring or a component developed by another team.

This distinction is important since internal and external dependencies are usually accompanied by different release and upgrade cycles: internal dependencies may be modified, rebuilt and updated on an hourly basis, while external dependencies' release cycle is significantly slower, with users applying the updates even less frequently, if at all. This is largely driven by the fact that internal dependencies are under our own control and have a narrow-scoped impact, limited to a specific project or module, while external dependencies can only be used as-is; their impact is potentially company- or world-wide, they are not scoped by any project and can be used anywhere. Naturally, this requires significantly higher standards of release stability, compatibility and maturity, hence slower release and update cycles.

Another aspect of "internal vs. external" dependency characteristics is expressed in how their versions are specified in a build script. Internal dependencies are usually defined using snapshot versions while external dependencies use release versions. The definition of "snapshot" and "release" versions was coined by Maven, which pioneered the idea of managing dependencies by a build tool. If you're familiar with automatic dependencies management, feel free to skip the following section, which provides a quick overview of how it works.

Automatic Dependencies Management

In Maven, dependencies are specified declaratively in a build script, an approach later followed by newer build tools such as Gradle, Buildr and sbt.
Maven:

<dependency>
  <groupId>org.codehaus.groovy</groupId>
  <artifactId>groovy-all</artifactId>
  <version>1.8.6</version>
  <scope>compile</scope>
</dependency>

Gradle:

compile "org.codehaus.groovy:groovy-all:1.8.6"

Buildr:

compile.with "org.apache.axis2:axis2:jar:1.6.1"

sbt:

libraryDependencies += "org.twitter4j" % "twitter4j-core" % "2.2.5"

Every dependency is identified by its coordinates and scope. Coordinates unambiguously specify the library and version used, while scope defines its visibility and availability in build tasks such as compilation or test invocation. For instance, compile "org.codehaus.groovy:groovy-all:1.8.6" would designate a Groovy "org.codehaus.groovy:groovy-all" distribution of version "1.8.6", used for source compilation and test invocation. Switching the scope to "test" or "runtime" would then narrow down the library visibility to tests-only or runtime-only, respectively.

When a build starts, dependencies are either located in a local artifacts repository managed by the build tool (similar to a browser cache) or downloaded from remote repositories, either public or private, such as Maven Central, Artifactory or Nexus. The build tool then adds the resolved artifacts to the corresponding classpaths according to their scopes. When assembling build artifacts, such as "*.war" or "*.ear" archives, all required dependencies are correctly handled and packaged as well.

Though dependencies management seems to be an essential part of almost any build, not all build tools provide built-in support for it: Ant and MSBuild lack this capability, a gap later addressed by Ivy and NuGet to some extent. However, Ivy's adoption was slower compared to Maven, while NuGet is a .NET-only tool. Over time, Maven artifact repositories and Maven Central have become a de facto mechanism for distributing and sharing Java artifacts. Being able to resolve and deploy these using Maven repositories has become a "must have" ability for all newer Java build tools.

Release and Snapshot Dependencies

As I mentioned previously, internal dependencies are normally defined using snapshot versions while external dependencies use release versions. Let's look into release versions first, as they are easier to reason about.

Release dependencies are those which have a fixed version number, such as the "1.8.6" version of the Groovy distribution. Whatever artifact repository is used by the build, and whenever it attempts to locate this dependency, it is always expected to resolve the exact same artifact. This is the main principle of release dependencies: "Same version = same artifact". Due to this fact, build tools do not check for a release dependency update once it is found and will only re-download the artifact if the local cache was emptied. And this all makes sense, of course, since we never expect to find divergent artifacts of the same library carrying an identical version number!

Snapshot dependencies are different and, as a result, way trickier to deal with. Snapshot dependency versions end with a special "-SNAPSHOT" keyword, like "3.2.0-SNAPSHOT". This keyword signals the build tools to periodically check the artifact with a remote repository for updates; by default, Maven performs this check on a daily basis. The function of snapshot dependencies, then, is to depend on someone else's work-in-progress (think "nightly builds"): when product development moves from version "X" to version "X+1", its modules are versioned "X+1-SNAPSHOT".
Snapshot Dependencies Uncertainty

If the main principle of release dependencies was "Same version = same artifact" (after version "X" of a library is released, its artifacts are identical all around the world, forever), the snapshot dependencies' principle is "Same version = ever-updating artifact". The benefit of this approach is that it enables retrieving frequent updates without the need to produce daily releases, which would be highly impractical. The downside, however, is uncertainty – using snapshot dependencies in a build script makes it harder to know which version was used in a specific build execution. My "maven-about-plugin" stores a textual "about" file in every snapshot artifact in order to better identify its origins, such as VCS revision and build number; this can be helpful but it only solves half of the problem.

Being a moving target by definition, snapshot dependencies do not allow us to pin down the versions on which we depend, and therefore build reproducibility becomes harder to achieve. Also, in a series or pipeline of builds (when a finished build triggers an invocation of subsequent ones), an artifact produced by the initial pipeline steps is not necessarily consumed by the closing ones, as it may have long been overridden by other build processes running at the same time.

One possible approach in this situation is to lock down a dependency version in a build script using a timestamp, so it becomes "3.2.0-20120119.134529-1" rather than "3.2.0-SNAPSHOT". This effectively makes snapshot dependencies identical to release dependencies and disables the automatic update mechanism, making it impossible to use an up-to-date version even when one is available unless the timestamp is updated. As you see, snapshot dependencies can be used where it makes sense, but it should be done with caution and in small doses. If possible, it is best to manage a separate release lifecycle for every reusable component and let its clients use periodically updated release dependencies.

Summary

This article has provided an overview of automatic dependencies management by Java build tools, together with an introduction to Maven release and snapshot dependencies. It also explained how snapshot dependencies' advantages become debatable in the context of build reproducibility and build pipelines. The following blog posts will explore the TeamCity build chains and Artifactory build isolation, which allow using consistent, reproducible and up-to-date snapshot versions throughout a chain of builds without locking down their timestamps in a build script. More to come!

Reference: Maven Build Dependencies from our JCG partner Evgeny Goldin at the Goldin++ blog.
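For illustration, here is what the snapshot versus timestamp-locked declarations discussed above might look like in a pom. The groupId and artifactId are made-up placeholders; the timestamped version is simply the example value from the text.

<!-- ever-updating snapshot: periodically re-resolved against the remote repository -->
<dependency>
  <groupId>com.example</groupId>            <!-- placeholder coordinates -->
  <artifactId>internal-module</artifactId>
  <version>3.2.0-SNAPSHOT</version>
</dependency>

<!-- the same dependency locked down to one concrete snapshot build by its timestamp -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>internal-module</artifactId>
  <version>3.2.0-20120119.134529-1</version>
</dependency>

The second form buys reproducibility at the cost of the automatic-update behaviour that makes snapshots useful in the first place, which is exactly the trade-off the article describes.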

The Eclipse Common Build Infrastructure

Creating a Common Build Infrastructure (CBI) at Eclipse has been one of our holy grail pursuits over the years. My main goal in this effort has been to get Eclipse committers out of the build business so that more time and energy can be focused on actually writing software (which is what most software developers would rather spend their time doing). When we talk about "build", we mean the process of taking source code and turning it into a form that adopters and users can download and consume. Building software in general – and building Eclipse plug-ins/OSGi bundles in particular – is a relatively hard problem.

The original CBI at Eclipse used a combination of PDE Build, black magic, cron, and manual intervention in the form of pulled hairs. Our first attempt at trying to make this easier for Eclipse projects resulted in the Athena Common Build. Athena added a layer on top of PDE Build that made it quite a lot easier to build Eclipse plug-ins, features, and update sites. The community owes a debt of gratitude to Nick Boldt for all the hard work he did to implement Athena, and his tireless efforts to help projects adopt it. Around the same time that Athena came into being, we brought Hudson into the picture to provide proper build orchestration.

Builds at Eclipse continued to evolve. The Buckminster project, which focuses on assembling artifacts, waded into the build space. The B3 project, a third-generation build based on Modeling technology, was created. At one point, a new project at Eclipse had lots of different choices for build. Then the push to Maven started. For years many vocal members of the community complained that Maven couldn't be used to build Eclipse bundles. It was Eclipse's fault. It was Maven's fault. There were many false starts. But everything changed with Tycho.

Tycho makes Maven understand how to build Eclipse plug-ins. Tycho facilitates a couple of useful things: first, it allows you to do "manifest-first" builds in which your Maven pom leverages the OSGi dependencies specified by an Eclipse bundle; second, it enables Maven to resolve dependencies found in p2 repositories. It does more than this, but these are the big ones. Unfortunately, we haven't found a good way to track the rate of migration, but in my estimation, Eclipse projects and many others in the adopter community are flocking to it.

The combination of Hudson, Maven, and Tycho seems to be delivering the holy grail. I managed to get up and running on Maven/Tycho in a matter of minutes and haven't thought about the build since. For projects delivering a handful of features and bundles, it's dirt-easy to get started and maintain the build. There are still a few rather large corner cases that need to be addressed. For example, we have a team working on moving the Eclipse project's build over to the CBI. The Eclipse project's build is about as hard as it gets.

The CBI has evolved and will continue to evolve. Our noble Webmaster recently added a new interface for signing build artifacts with the Eclipse certificate. We have ongoing work to develop a standard "parent pom" for Eclipse projects, and even a Maven repository where Eclipse projects can push their build results for dissemination to the general public. So the CBI at Eclipse seems to be stabilising around these technologies. But I have no doubt that it will continue to evolve, especially as more and more projects start to consider implementing continuous build strategies combining Gerrit and Hudson.
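To give a flavour of the manifest-first setup Tycho enables, here is a hedged, minimal pom sketch for a single plug-in module. The coordinates and the Tycho version are assumptions for illustration; this is not the official Eclipse parent pom.

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.example.cbi</groupId>               <!-- placeholder coordinates -->
  <artifactId>org.example.cbi.plugin</artifactId>  <!-- must match the bundle symbolic name -->
  <version>1.0.0-SNAPSHOT</version>
  <!-- tells Tycho to read dependencies from META-INF/MANIFEST.MF ("manifest-first") -->
  <packaging>eclipse-plugin</packaging>

  <properties>
    <tycho.version>0.14.1</tycho.version>          <!-- assumed version, roughly current at the time -->
  </properties>

  <repositories>
    <!-- a p2 repository Tycho can resolve OSGi dependencies from -->
    <repository>
      <id>eclipse-indigo</id>
      <url>http://download.eclipse.org/releases/indigo</url>
      <layout>p2</layout>
    </repository>
  </repositories>

  <build>
    <plugins>
      <plugin>
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>tycho-maven-plugin</artifactId>
        <version>${tycho.version}</version>
        <extensions>true</extensions>              <!-- enables the eclipse-plugin packaging -->
      </plugin>
    </plugins>
  </build>
</project>

The pom carries almost no dependency information of its own; the bundle manifest stays the single source of truth, which is the point of the manifest-first approach described above.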
Reference: The Eclipse Common Build Infrastructure from our JCG partner Wayne Beaton at the Eclipse Hints, Tips, and Random Musings blog....

ORM Haters Don’t Get It

I've seen tons of articles and comments (especially comments) that tell us how bad, crappy and wrong the concept of ORM (object-relational mapping) is. Here are the usual claims, and my comments on them:

- "they are slow" – there is some overhead in mapping, but it is nothing serious. Chances are you will have much slower pieces of code.
- "they generate bad queries which hurt performance" – first, an ORM generates better queries than the regular developer would write, and second, it only generates bad queries if you use bad mappings.
- "they deprive you of control" – you are free to execute native queries.
- "you don't need them, plain SQL and JDBC is fine" – no, but I'll discuss this in the next paragraph.
- "they force you to have getters and setters, which is bad" – your entities are simple value objects, and having setters/getters is fine there. More on this below.
- "database upgrades are hard" – there are a lot of tools around the ORMs that make schema transitions easy. Many ORMs have these tools built in.

But why do you need an ORM in the first place? Assume you decided not to use one. You write your query and get the result back, in the form of a ResultSet (or whatever it looks like in the language you use). There you can access each column by its name. The result is a type-unsafe, map-like structure. But the rest of your system requires objects – your front-end components take objects, your service methods need objects as parameters, etc. These objects are simple value objects, and exposing their state via getters is nothing wrong. They don't have any logic that operates on their state; they are just used to transfer that state. If you are using a statically-typed language, you are most likely using objects rather than type-unsafe structures around your code, not to mention that these structures are database-access interfaces, and you wouldn't have them in your front-end code.

So then a brilliant idea comes to your mind – "I will create a value object and transfer everything from the result set to it. Now I have the data in an object, and I don't need database-access-specific interfaces to pass around in my code". That's a great step. But soon you realize that this is a repetitive task – you are creating a new object and manually, field by field, transferring the result from your SQL query to that object. And you devise some clever reflection utility that reads the object fields, assumes you have the same column names in the DB, reads the result set and populates the object. Well, guess what – ORMs have been doing the same thing for years and years now. I bet theirs are better and work in many scenarios that you don't suspect you'll need. (And I will just scratch the surface of how odd the process of maintaining native queries is – some put them in one huge text file (ugly), others put them inline (how can the DBAs optimize them now?).)

To summarize the previous paragraph – you will create some sort of ORM in your project, but yours will suck more than anything out there, and you won't admit it's an ORM.

This is a good place to mention a utility called commons-dbutils (Java). It is a simple tool to map database results to objects that covers the basic cases. It is not an ORM, but it does what an ORM does – maps the database to your objects. But there's something missing in the basic column-to-field mapper, and that's foreign keys and joins. With an ORM you can get the User's address in an Address field even though a JOIN would be required to fetch it. That's both a strength and a major weakness of ORMs.
The *ToOne mappings are generally safe. But *ToMany collections can be very tricky, and they are very often misused. This is partly the fault of ORMs, as they don't warn you in any way about the consequences of mapping a collection of, say, all orders belonging to a company. You will never, and must never, need to access that collection, but you can map it. This is an argument I've never heard from ORM haters, because they didn't get to this point.

So, are ORMs basically dbutils plus the evil and risky collection mapping? No, they give you many extras that you need. Dialects – you write your code in a database-agnostic way, and although you are probably not going to change your initially selected database vendor, it is much easier to use any database without every developer learning the quirks of its syntax. I've worked with MSSQL and Oracle, and I barely felt the pain of working with them.

Another very, very important thing is caching. Would you execute the same query twice? I guess not, but if it happens to be in two separate methods invoked by a third method, it might be hard to catch, or hard to avoid. Here comes the session cache, which saves you all the duplicated queries to get some row (object) from the database. There is one more criticism of ORMs here – session management is too complicated. I have mainly used JPA, so I can't speak for others, but it is really tricky to get session management right. It is all for very good reasons (the aforementioned cache, transaction management, lazy mappings, etc.), but it is still too complicated. You would need at least one person on the team who has a lot of experience with a particular ORM to set it up right.

But there's also the 2nd-level cache, which is significantly more important. This sort of thing is what allows services like Facebook and Twitter to exist – you stuff your rarely-changing data in (distributed) memory and instead of querying the database every time, you get the object from memory, which is many times faster. Why is this related to ORMs? Because the caching solution can usually be plugged into the ORM, and you can store the very same objects that the ORM generated in memory. This way caching becomes completely transparent to your database-access code, which keeps it simple and yet performant.

So, to summarize – ORMs do what you would need to do anyway, but it is almost certain that a framework that's been around for 10 years is better than your homegrown mapper, and they provide a lot of necessary and important extras on top of their core functionality. They also have two weak points (both of which practically say "you need to know what you are doing"):

- they are easy to misuse, which can lead to fetching huge, unnecessary results from the database. You can very easily create a crappy mapping which can slow down your application. Of course, it is your responsibility to have a good mapping, but ORMs don't really give you a hand there.
- their session management is complicated, and although that is for very good reasons, it may require a very experienced person on the team to set things up properly.

I've never seen these two being used as arguments against ORMs, whereas the wrong ones at the beginning of this article are used a lot, which leads me to believe that people raging against ORMs rarely know what they are talking about.

Reference: ORM Haters Don't Get It from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog.
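As a hedged illustration of the safe *ToOne versus risky *ToMany point above, here is a minimal JPA sketch; the entity names are made up and this is not code from the article.

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
public class Company {

    @Id
    @GeneratedValue
    private Long id;

    // *ToOne: resolves to a single row via a join - generally safe to map
    @ManyToOne(fetch = FetchType.LAZY)
    private Address headquarters;

    // *ToMany: "all orders belonging to a company" - easy to declare,
    // but touching this collection can drag an unbounded result set into memory
    @OneToMany(mappedBy = "company", fetch = FetchType.LAZY)
    private List<PurchaseOrder> orders;
}

@Entity
class Address {
    @Id
    @GeneratedValue
    Long id;
}

@Entity
class PurchaseOrder {
    @Id
    @GeneratedValue
    Long id;

    // owning side of the Company.orders collection
    @ManyToOne(fetch = FetchType.LAZY)
    Company company;
}

The collection mapping compiles and deploys without complaint, which is exactly the author's point: nothing in the mapping itself warns you that iterating orders for a large company is a terrible idea.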