My walk through the Git book

I’ve been experimenting with git for about the last year, but most of the work I did with it so far was in the “single developer, hack some stuff, push to github” mode of operation, which is very superficial. Now that I’ll be working with it full time (git is one of the “semi widely adopted” SCMs at Google), I thought it was time to take a closer look at some of the wisdom accumulated by other folks, so I finally cracked open the Git book and did a pass over it. The book is great and usually very fluid. It begins by showcasing the simple use cases you’ll encounter with git, and is filled with short code snippets you can try (even on a train with no WiFi – this is a distributed source control system after all). Some of the examples weren’t crystal clear straight out of the box, and relied on previous knowledge the authors had (after all, much of the book was pulled together from different sources, so I imagine it was relatively easy to accidentally assume a bit of knowledge that readers don’t necessarily have at that point). Here is a summary of questions I had while reading the book, followed by some cool stuff I found at the end. I recommend at least some knowledge of git for the rest of this article, best accompanied by a reading of the Git book itself. As usual, if you find a mistake, please let me know. Some more related recommended reading is the Git for beginners SO question.

What happens on double git add?

git add is used not just to add new files, but also to ‘add’ changes in existing files. When I do:

echo v1 > foo
git add foo
echo v2 > foo
git add foo
git commit -m bar

are both versions of foo added to the commit log, or just the latest? The answer is that just the latest version is actually committed.

After I git merge without conflicts, is a git commit needed?

Coming from svn, it was my expectation that after I merge changes into my local branch, I will have to commit them.
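You can verify this yourself in a throwaway repository (a sketch; it assumes git is installed and creates a temporary directory):

```shell
# What does a double `git add` actually commit?
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name you
echo v1 > foo
git add foo
echo v2 > foo
git add foo            # re-staging replaces the previously staged snapshot
git commit -q -m bar
git show HEAD:foo      # prints "v2" - only the latest staged version was committed
```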
Doing a quick experiment showed that in git this is not the case at all – if a merge is resolved without manual intervention (including concurrent edits to different places of the same file), then no commit is needed. If there are any conflicts that are resolved manually (by git adding the file after fixing the merge), then a git commit is required.

How does gitk work? Sometimes I see branches, sometimes I don’t … it’s very confusing

This one has been puzzling me for quite a long time. I found that I couldn’t trust gitk, the graphical tool for visualizing commits, branches and merges, because it kept giving me inconsistent results, and for the life of me I couldn’t understand why. After a few experiments and some digging, I found that by default gitk will only show you the current branch, and any objects that are its descendants in the version graph. If you create a branch, switch back to master, and run gitk, you will not see this branch. What confused me is that upon refreshing, gitk rescans the current branch and adds any new nodes to its display, while retaining anything already shown – meaning if you run gitk, switch to a new branch, and refresh gitk, the new branch and its relation to the previous one will now be displayed. Of course, like most things on Linux, gitk can be made to behave the way you want: just follow the gitk command with the names of the branches you want shown, or simply add “--all” to see all the branches in your repository.

How can you see the ‘branch structure’ of a repository?

In svn, there is a well-defined directed graph between branches. When a branch is created off its parent, this parent-child relation is created and maintained, and the tools readily show you this branch graph. I could have guessed this, but sources on Stack Overflow confirmed that there is no direct equivalent in git.
Instead of branches having parent-child relations, there is a parent-child relation between objects, so individual files and directories can have multiple parents in the version graph, while other files on the same branch might have completely linear histories. The model is more complex, but more powerful, and it seems to be the core reason why merges in git are supposed to be easier than in svn.

What does ‘fast forward’ really mean?

Using git, I often saw messages with the words “fast forward”, but never really understood what they meant. This bit is explained rather nicely in the Git book – a fast forward happens when you merge branch b1 into b2, resolve any possible conflicts, and then merge the result back to b1. b2 already contains a version that is a descendant of the “heads” of both b1 and b2, meaning all the “merge work” was already done in it. So, when this structure is merged back to b1, what actually happens is that all the revisions and merge work that happened on b2 are copied to b1. After this copying, the b1 branch (a pointer into the revision DAG) is “fast forwarded” to a descendant node that is the head of b2. In effect, the merge’s result becomes the head of b1 in a clean and simple manner. This is radically different from svn – I still have horror flashbacks sometimes about trying to merge a branch back to trunk. I always first merged trunk into the branch, had to work my ass off to resolve all the conflicts and make the build green, and then sometimes had to do double the work when merging back to trunk. With git, you’re assured that the conflict resolution work you do on your branch is preserved and used to make merging back to master (the git equivalent of trunk) as easy as cake.

git pull, fetch, and what’s in between

It is said that “git pull” is equivalent to “git fetch”, followed by “git merge”.
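A minimal fast-forward can be reproduced in a throwaway repository (again a sketch, assuming git is available):

```shell
# Reproduce a fast-forward: master gains no commits of its own while a
# branch moves ahead, so merging the branch just moves the master pointer.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git symbolic-ref HEAD refs/heads/master   # pin the branch name to "master"
git config user.email you@example.com
git config user.name you
echo base > file && git add file && git commit -q -m base
git checkout -q -b b2
echo more > file2 && git add file2 && git commit -q -m more
git checkout -q master
# master is a strict ancestor of b2, so no merge commit is needed;
# git reports "Fast-forward" and moves master up to b2's head:
git merge b2
```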
The ability to immediately fetch all the content of any remote repository without being forced to merge it right away is great – you’re free to do the actual merge work and conflict resolution separately, and you only need connectivity to the remote repository for the fetch phase. When I tried this using two local folders, git merge complained, and I failed to understand what arguments I should pass to “git merge” in this case. This turned out to be a simple technical issue: to merge the changes manually after fetching from an arbitrary remote, simply run git merge FETCH_HEAD (sometimes you just have to know the magic words). Normally, you would fetch from origin (usually the branch you cloned off), or another remotely tracked named branch, so you would just specify its name as the parameter to “git merge”.

How does pushing actually work?

Let’s say I set up a local “common” repo (it has to be bare for reasons explained in the Git book):

mkdir bare
cd bare
git init --bare
cd ..
git clone bare alice
cd alice
touch a && git add a && git commit -m "Added a"
git push # This fails

Why does the push fail? It turns out that the problem was that I tried to push to an empty repository. If I do “git push origin master”, then subsequent “git push” invocations with no arguments succeed.

And now, for some cool stuff:

git bisect ftw

Suppose you just found a critical bug, and have no idea when it was introduced. You write a simple (manual/automated) test for it, and reproduce it, but you’re not sure what is causing it. git bisect to the rescue! git bisect allows you to do a binary search on your repository to find the exact commit that introduced the bug. While this is possible with other VCSs, it is so natural in git that it’s beautiful.
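The failing push and its fix can be reproduced end to end (a sketch assuming git is available; `-u` records the upstream so that later bare pushes work on modern git, whose default push behaviour differs from the git of 2011):

```shell
# Push to a freshly created bare repository: a plain "git push" has no
# branch on the empty remote to match against, so the branch must be
# named once; after that, bare pushes succeed.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare bare.git               # the shared "common" repository
git clone -q bare.git alice 2>/dev/null   # warns: cloned an empty repository
cd alice
git symbolic-ref HEAD refs/heads/master   # make sure we are on "master"
git config user.email you@example.com
git config user.name you
touch a && git add a && git commit -q -m "Added a"
git push -q -u origin master              # name remote and branch once
git push -q                               # subsequent bare pushes now succeed
```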
You simply do “git bisect start”, then mark a version that works with “git bisect good” and one that doesn’t with “git bisect bad”, and git will direct you towards the correct half of the version graph until you find the exact version where things turned bad.

Configure your defaults for fun and profit

Here are some tweaks I found in the book that you might want to make (if you have any other tweaks you’d like to recommend, please comment!)

Oneline log messages: If, like me, you find the “one liner” log messages easier to read, you can make them the default with git config --global format.pretty oneline

Life is colorful: Make git status and other messages much easier to read with git config --global color.ui true

Reference: My walk through the Git book from our JCG partner Ron Gross at the A Quantum Immortal blog.

Integration testing scoped beans in CDI 1.0 and Spring 3.1

In this blog post I describe how to do integration testing with scoped beans in Spring and CDI. Everything is illustrated with small code samples. Integration testing with scopes is not particularly easy. Imagine a bean that lives in the session scope, like UserCredentials. In an integration test you typically have no HttpRequest or HttpSession to work on (at least if you are not doing tests that include your user interface). Therefore you need some infrastructure for integration testing. With both technologies it is a little puzzling to get this infrastructure going; judge for yourself. If you are new to scope and context in CDI and Spring, check out the basics and get an overview of the different scopes.

Integration testing scoped beans in Spring

In Spring 3.1 there is no integration test support for session- or request-scoped beans (see here). It is scheduled for Spring version 3.2. However, this link explains a solution that worked for me. First you need to develop a SessionScope for the test. Its purpose is to mock an HttpRequest and an HttpSession.
package com.mycompany.springapp.scope;

import org.springframework.beans.factory.InitializingBean;
import org.springframework.mock.web.MockHttpServletRequest;
import org.springframework.mock.web.MockHttpSession;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
import org.springframework.web.context.request.SessionScope;

public class SetupSession extends SessionScope implements InitializingBean {

    public void afterPropertiesSet() throws Exception {
        MockHttpServletRequest request = new MockHttpServletRequest();
        MockHttpSession session = new MockHttpSession();
        request.setSession(session);
        RequestContextHolder.setRequestAttributes(new ServletRequestAttributes(request));
    }
}

To register that class as the session scope management object in your test-beans.xml do this:

Notice that I registered the scopes after the context:component-scan tag. Finally, I wrote my test class:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.util.Assert;

@ContextConfiguration("/test-beans.xml")
@RunWith(SpringJUnit4ClassRunner.class)
public class MyScopeBeanTest {

    @Autowired
    private MyScopeBean myScopeBean;

    @Test
    public void testBeanScopes() {
        Assert.isTrue(myScopeBean.getMyCustomScopedService().getName().equals("Test"));
        Assert.isTrue(myScopeBean.getMySessionScopedService().getName().equals("Test"));
    }
}

Notice that I have called a method getName() on the scoped bean. This is necessary to ensure that scoping works: the client proxy may get injected at the injection point successfully, but only when you actually make a call on the proxy does it have to resolve the scoped object and its collaborators behind it.
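To see what the session scope machinery is doing for the test, here is a framework-free sketch of the idea – a per-session cache of lazily created beans. This is an analogy only, not Spring code; all names are made up:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A framework-free analogy of a session scope: each logical "session"
// gets its own cache of bean instances, and a bean is created only the
// first time it is requested within that session. The mock request and
// session above exist to give Spring such a cache to resolve against.
class ToySessionScope {
    private final Map<String, Map<String, Object>> sessions = new HashMap<>();

    Object get(String sessionId, String beanName, Supplier<Object> factory) {
        return sessions
                .computeIfAbsent(sessionId, id -> new HashMap<>())
                .computeIfAbsent(beanName, n -> factory.get());
    }
}
```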
Integration testing scoped beans with CDI

The tool I used for integration testing CDI is Arquillian. There are alternatives: you could use Weld “natively” if you only test CDI classes, but if you also have EJBs, that’s not sufficient. Arquillian comes with a reasonable amount of transitive dependencies. Let’s see how to get the stuff going. Note: without Maven you’re lost in the desert here, so I encourage you to use it! I tried m2eclipse for Helios; it did not work for me, so I went back to the good old command line using Maven 3.

Changes to your pom.xml file

These samples assume you have a Java EE project working; you can also see here how to set up a new Java EE 6 project. To integrate Arquillian make the following changes to your pom.xml file. In the properties section:

<arquillian.version>1.0.0.Alpha5</arquillian.version>

Add this repository:

<repository>
    <id>repository.jboss.org</id>
    <url>http://repository.jboss.org/nexus/content/groups/public</url>
    <layout>default</layout>
    <releases>
        <enabled>true</enabled>
        <updatePolicy>never</updatePolicy>
        <checksumPolicy>warn</checksumPolicy>
    </releases>
    <snapshots>
        <enabled>false</enabled>
        <updatePolicy>always</updatePolicy>
        <checksumPolicy>warn</checksumPolicy>
    </snapshots>
</repository>

This is the official JBoss Maven repository where all the Arquillian distributions are available. Add the following dependencies to your pom.xml:

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.8.1</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.jboss.arquillian</groupId>
    <artifactId>arquillian-junit</artifactId>
    <version>${arquillian.version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.jboss.arquillian.container</groupId>
    <artifactId>arquillian-glassfish-remote-3.1</artifactId>
    <version>${arquillian.version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>javax.enterprise</groupId>
    <artifactId>cdi-api</artifactId>
    <version>1.0-SP4</version>
    <scope>test</scope>
</dependency>

The first dependency is the JUnit framework used to write integration tests. The second integrates Arquillian with JUnit. The third integrates your deployment container – for me that is my Glassfish installation. The last is the CDI API that needs to be available for your CDI tests. Notice that in the third dependency I am using my Glassfish 3.1 installation as the deployment container, and Arquillian uses remote calls to perform the tests. You need to configure your own deployment environment here; see the JBoss Maven repo for the correct artifactId value. With Arquillian your target environment can also be an embedded container such as JBoss Embedded AS, GlassFish Embedded or Weld SE.
In that case, you don’t need a separate container installation or remote calls; everything runs locally (“in-memory”). Do a mvn eclipse:eclipse after you have added the dependencies for your target environment.

Writing and executing a test with Arquillian and JUnit

Finally I wrote my first Arquillian integration test class:

import javax.inject.Inject;

import junit.framework.Assert;

import org.jboss.arquillian.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ArchivePaths;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

import com.mycompany.jeeapp.scope.MyApplicationService;
import com.mycompany.jeeapp.scope.MyConversationService;
import com.mycompany.jeeapp.scope.MyDefaultService;
import com.mycompany.jeeapp.scope.MyRequestService;
import com.mycompany.jeeapp.scope.MyScopeBean;
import com.mycompany.jeeapp.scope.MySessionService;
import com.mycompany.jeeapp.scope.MySingletonService;
import com.mycompany.jeeapp.scope.extension.MyCustomScopeService;

@RunWith(Arquillian.class)
public class MyArquillianJUnitTest {

    @Inject
    private MyScopeBean myScopeBean;

    @Deployment
    public static JavaArchive createTestArchive() {
        return ShrinkWrap
                .create(JavaArchive.class, "test.jar")
                .addClasses(MyScopeBean.class, MyApplicationService.class,
                        MyConversationService.class, MyDefaultService.class,
                        MyRequestService.class, MySessionService.class,
                        MySingletonService.class, MyCustomScopeService.class)
                .addAsManifestResource(EmptyAsset.INSTANCE,
                        ArchivePaths.create("beans.xml"));
    }

    @Test
    public void testScopedBeans() {
        Assert.assertTrue(myScopeBean.getApplicationService().getSomeName()
                .equals("myName"));
        Assert.assertTrue(myScopeBean.getApplicationServiceWithNew().getSomeName()
                .equals("myName"));
        Assert.assertTrue(myScopeBean.getCustomScopeService().getSomeName()
                .equals("myName"));
        Assert.assertTrue(myScopeBean.getDefaultService().getSomeName()
                .equals("myName"));
        Assert.assertTrue(myScopeBean.getRequestService().getSomeName()
                .equals("myName"));
        Assert.assertTrue(myScopeBean.getSessionService().getSomeName()
                .equals("myName"));
        Assert.assertTrue(myScopeBean.getSingletonService().getSomeName()
                .equals("myName"));
    }
}

Conclusion

Spring does not offer integrated test support for scoped beans at the moment. This was very surprising, as Spring has always attached major importance to testing topics. There is a workaround that I have described in this blog; it wasn’t difficult to make it work. Full integration test support is scheduled for release 3.2 M1. CDI scoped bean testing is enabled by Arquillian. I had some problems during set-up (see the last paragraph below), which I think is usual when you use a new technology. The fact that you have to pass all the beans under test to the archive (see the @Deployment method) is something I need to try in a large project: is that really a good idea? Sometimes large applications are wired together from dozens of beans in different packages, and it’s difficult to predict which beans are used in an integration test.

Problems and solutions

Some Arquillian set-ups come with so many dependencies that you cannot use standard Eclipse launch configurations: the generated command line argument exceeds the Windows length limit for command line instructions. Therefore I have used an Ant script to start my tests. The script is just for illustration – you have to build your own Ant script. You can get your classpath information as follows: in Eclipse, go to “File > Export > General > Ant buildfiles” to generate your classpath information, then take this classpath info and drop it into your Ant JUnit test start script. I have documented my complete Ant script here. When I started this Ant script, everything worked fine for me.
If you have any issues let me know; you can look into your test results file and into server.log to analyse them.

More error messages during Arquillian set-up

WELD-001303 No active contexts for scope type javax.enterprise.context.ConversationScoped -> The ConversationScope is bound to JSF by the EE spec, so it won’t be active during a normal HTTP request, which is what Arquillian is piggybacking on.

POST http://localhost:4848/management/domain/applications/application returned a response status of 403 -> 404/403 errors can be deployment issues; check server.log for the root cause (mine was that I did not have all the required classes added to test.jar).

Exception occurred executing command line. Cannot run program “D:\dev_home\java-6-26\bin\javaw.exe” (in directory “D:\dev_home\repositories\git\jee-app-weld\jee-app-weld”): CreateProcess error=87, Falscher Parameter (“incorrect parameter”) -> The classpath exceeds the allowed length for Windows command line operations. You need to use an Ant script or Maven to run the tests.

ValidationException: DeploymentScenario contains targets not matching any defined Container in the registry -> see here.

WELD-000072 Managed bean declaring a passivating scope must be passivation capable. Bean: Managed Bean [class com.mycompany.jeeapp.scope.example.UserCredentials] with qualifiers [@Any @Default] -> You need to implement Serializable on session- and conversation-scoped beans.

DeploymentScenario contains targets not matching any defined Container in the registry. _DEFAULT_ -> see here.

java.net.ConnectException: Connection refused: connect -> Your remote Java EE server installation is not running – start it!

Reference: “Integration testing scoped beans in CDI 1.0 and Spring 3.1” from our JCG partner Niklas.

Small things around Oracle Weblogic 11g (10.3.4)

I am doing a lot of setting up and configuration for Weblogic this week (devops, I guess). I have been working with Weblogic for the past 4 years and I have to admit – like Eclipse – I have started getting used to it. I was a Netbeans/JBoss developer, and I’ve turned into an Eclipse/Weblogic guy. I am writing this post as a reference for things I found during research, and for some small technical problems encountered setting up the server on different environments (this is ongoing work, since I will try RHEL in the next few days).

General comment regarding Weblogic

My first try and experience with Weblogic was with version 9. Still a BEA trademark – I could not really see any reason why it was considered better compared to JBoss. It had a quite good admin console, but that was about it. This feeling did not change through the early 10.x releases. For the past 2 years I was working with 10.3.1. In general it was stable enough, and it had WLST (which is really handy if you manage to master your Jython skills), but it was still a J2EE 5 container and IMHO had lots of small to medium sized bugs that could drive you crazy. Some of them would be resolved through its slow-paced release cycle. Now that I am trying 10.3.4, I can see many improvements in start-up times, fewer warnings and errors, plus broader support for ‘other’ operating systems (for example, I can now run the server almost error-free on MacOSX). It is still a pure J2EE 5 container, with many extra technologies provided by Oracle like Coherence (which I am not going to use or enable anyway). Overall, things have improved, it is not that bad anymore (there are still worse containers in production out there) and I am really curious (eager) to work with the very latest version, 12c.

Installing Weblogic on a Mac

This is an old trick – so I am just re-posting. From 10.3 on (if I remember well) a generic distributable jar has been provided, so it can be used on MacOSX or other OSes.
When you are running the installer, at some point you are asked to provide a suitable JDK. Up until JDK 6, which is the latest supported version on MacOSX, when you indicated the path from your Mac’s system library the installer complained that this was not a valid path: it was searching for a /jre folder within the jdk folder. In order to make it work you just had to follow this trick (create the jre folder on your own and touch the required libs). Note that this is expected to change with JDK 7; for example, on the openJDK builds currently installed on my MacBookPro the jdk folder has a /jre subfolder in place. Maybe Oracle will fix the generic java installer to support the MacOSX jdk folder format, who knows.

Installing Weblogic 11g on a 64-bit OS

OK, that was fun but scary. I installed Weblogic 10.3.4 (11g) using the small zip distribution on a pure 64-bit Intel PC with Windows 7 Enterprise: just unzip the thing and run the configure script. When I started the server I got, among others, a scary message like the following. A quick search showed that it was some sort of a bug. On Windows I followed the instructions as found here, and modified the JAVA_OPTIONS value to point to the specific x64 path for the native I/O lib paths. Interestingly enough, when I installed Weblogic on the same environment using the generic java (jar) installer, I did not get this warning! The same applied when I installed Weblogic on MacOSX. It seems that the generic java installer does put things properly in place. That is all for now… more to come, I guess (thank god there are still people willing to share tips and tricks on every dev problem – kudos to the developer community). Reference: Small things around Oracle Weblogic 11g (10.3.4) from our JCG partner Paris Apostolopoulos at the Papo’s log blog.

Java Web Hosting Options Flowchart

One question I get asked a lot is where and how to host your Java web application. It’s all fine to create it inside Eclipse with an embedded server, but how do you get it to the people? For a long time, there was no answer for enthusiast programmers – there were only expensive and way oversized options. Things have changed lately, but it’s still not an easy choice. Therefore I have created a small flowchart that will try to guide you through the maze. Feel free to submit additions, corrections and comments; I’ll keep updating the flowchart. If you liked this, why not share it with your friends? Reference: Java Web Hosting Options Flowchart from our JCG partner Peter Backx at the Streamhead blog.

Introduction to OSGi – Modular Java

The OSGi Alliance is the governing body of this standard, and it was started in 1999. Its initial goal was to create an open standard for network devices, and based on this idea the specification was introduced for Java as well. Eclipse was the first adopter in the Java world: they introduced the OSGi-based Eclipse IDE in June 2004. OSGi is a way to define dynamic modules in Java. There are three main OSGi containers implemented for Java: Apache Felix, Eclipse Equinox and Knopflerfish.

Why OSGi? Because OSGi provides the ability to divide an application into multiple modules, and those modules are easy to manage together with their dependencies. Other than that, it is very easy to install, update, stop and delete a module without stopping the engine (e.g. the Tomcat web application container), and we can have multiple versions of an implementation without affecting other references. A web-based Java application typically has three main layers (presentation, business and DAO). We can divide it into three OSGi-based modules; then we can very easily fix a bug in one layer without affecting the others or restarting our web container – we just need to update our module. In the OSGi world the output is a bundle, which can be either a JAR or a WAR file. A bundle consists of Java classes and other resources, together with some additional metadata (describing the services and packages it provides to other bundles). I am going to use the Eclipse IDE to create my first bundle, because Eclipse has a built-in Equinox container (every Eclipse plugin is an OSGi bundle).

Create an Eclipse Plug-In-Project

Go to New –> Other –> Plug-In-Project and click on Next; the new project creation dialog will appear. Provide the project name and target platform as below
and click on Next:

Project name: com.chandana.Hello.HelloWorld
Target platform: select Standard OSGi

In the next screen you can change the bundle information (this information lives in MANIFEST.MF, and I will give detailed information later); then click on the Next button. After that, the OSGi project template selection dialog will appear. There, select Hello OSGi Bundle and click on Finish. After a few seconds Eclipse will generate the Hello World Plug-In-Project (I had a Not Responding Eclipse for a few seconds :) ). My project structure is like this:

Activator.java

package com.chandana.hello.helloworld;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {

    public void start(BundleContext context) throws Exception {
        System.out.println("Hello World!!");
    }

    public void stop(BundleContext context) throws Exception {
        System.out.println("Goodbye World!!");
    }
}

Activator is a class which implements the BundleActivator interface. It has start and stop methods, which are called when the bundle is started or stopped. This bundle activator class is specified in the MANIFEST.MF file (the Bundle-Activator entry).

Start method: The OSGi container calls the start method when the bundle is starting. We can use the start method to initialize a database connection or register a service for other bundles to use.

Stop method: The OSGi container calls the stop method when the bundle is stopping.
We can use this method to remove services from the service registry, as a clean-up process.

MANIFEST.MF

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: HelloWorld
Bundle-SymbolicName: com.chandana.Hello.HelloWorld
Bundle-Version: 1.0.0.qualifier
Bundle-Activator: com.chandana.hello.helloworld.Activator
Bundle-Vendor: CHANDANA
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Import-Package: org.osgi.framework;version="1.3.0"

Bundle-ManifestVersion: The Bundle-ManifestVersion header shows the OSGi container that this bundle follows the rules of the OSGi specification. A value of 2 means that the bundle is compliant with OSGi specification Release 4; a value of 1 means that it is compliant with Release 3 or earlier.

Bundle-Name: The Bundle-Name header defines a short, readable name for the bundle.

Bundle-SymbolicName: The Bundle-SymbolicName header specifies a unique name for the bundle. This is the name you will use when referring to a given bundle from other bundles.

Bundle-Version: The Bundle-Version header is the version number of the bundle.

Bundle-Vendor: The Bundle-Vendor header is a description of the vendor (for example, my name).

Import-Package: Import-Package indicates which packages from other (OSGi) bundles this bundle requires – what we call dependencies.

Export-Package: Export-Package indicates which packages in the bundle are public; these exported packages can be imported by other bundles.

Run the bundle

To run this project click on Run –> Run Configuration; under OSGi Framework, right-click and create a new run configuration. First uncheck all the target platform bundles and click on Add Required Bundles. After that, apply the changes and run the project by clicking the Run button.
After running the project, the OSGi console is displayed as below.

OSGi terminal commands:

start – start the specified bundle(s)
stop – stop the specified bundle(s)
uninstall – uninstall the specified bundle(s)
update – update the specified bundle(s)
refresh – refresh the packages of the specified bundle(s)
b – display details for the specified bundle(s)
headers – print bundle headers
services – display registered service details

Source Code

Next, I will describe how to create a service-dependent OSGi bundle. An OSGi service is a Java object instance which is registered with the OSGi framework with a set of attributes. Services are accessed via the service registry (performed via the class BundleContext). The BundleActivator is invoked on start and stop, and when the BundleActivator’s start method is called we are going to register our service. After that, any bundle can access the service.

Service bundle: In the service bundle you need to export your service and register it via the service registry. When exporting a service we export the interface package only – as usual, that is to hide the implementation from the other bundles.
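The registry idea behind registerService()/getService() can be sketched framework-free in a few lines (a toy analogy, not the real OSGi API; the class below is made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// A toy sketch of the service-registry idea: bundles publish an
// implementation under its interface name, and clients look it up by
// that name without ever seeing the implementation class. Real OSGi
// adds versioning, attributes, and dynamic un/re-registration on top.
class ToyServiceRegistry {
    private final Map<String, Object> services = new HashMap<>();

    void register(String interfaceName, Object implementation) {
        services.put(interfaceName, implementation);
    }

    @SuppressWarnings("unchecked")
    <T> T get(String interfaceName) {
        return (T) services.get(interfaceName);  // null if never registered
    }
}
```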
I have created a sample OSGi project called HelloService.

MANIFEST.MF

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: HelloService
Bundle-SymbolicName: com.chandana.hello.HelloService
Bundle-Version: 1.0.0
Bundle-Activator: com.chandana.hello.helloservice.Activator
Bundle-Vendor: CHANDANA
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Import-Package: org.osgi.framework;version="1.3.0"
Export-Package: com.chandana.hello.service
Bundle-ActivationPolicy: lazy

Service interface:

public interface HelloService {
    public String helloMethods();
}

Service implementation:

public class HelloServiceImpl implements HelloService {
    @Override
    public String helloMethods() {
        String retValue = "Inside Hello Service method";
        return retValue;
    }
}

Bundle activator:

public class Activator implements BundleActivator {

    ServiceRegistration serviceRegistration;

    public void start(BundleContext context) throws Exception {
        System.out.println("Bundle Started.....!!!!!");
        HelloService service = new HelloServiceImpl();
        serviceRegistration = context.registerService(HelloService.class.getName(), service, null);
    }

    public void stop(BundleContext context) throws Exception {
        System.out.println("Bundle Stopped.....!!!!!");
        serviceRegistration.unregister();
    }
}

To use the published service, we import it from another bundle, so we need to create another Plug-In-Project for HelloClient.

Bundle context

The bundle context is the context of a single bundle within the OSGi runtime, and it is created when the bundle gets started. The bundle context can be used to install new bundles, obtain services registered by other bundles, and register services in the framework.

MANIFEST.MF

Import-Package: org.osgi.framework;version="1.3.0",com.chandana.hello.service

After importing the package, you can access the service.
The important thing is that the service can be accessed only through the bundle context: you get the actual service object via the BundleContext.getService() method.

Activator class:

public class Activator implements BundleActivator {

    ServiceReference serviceReference;

    public void start(BundleContext context) throws Exception {
        serviceReference = context.getServiceReference(HelloService.class.getName());
        HelloService helloService = (HelloService) context.getService(serviceReference);
        System.out.println(helloService.helloMethods());
    }

    public void stop(BundleContext context) throws Exception {
        context.ungetService(serviceReference);
    }
}

The context.getServiceReference() method returns the HelloService OSGi service reference, and using that service reference you can access the actual service object. To run this project click on Run –> Run Configuration; under OSGi Framework, right-click and create a new run configuration. Make sure both HelloService and HelloClient are selected.

Issues: What happens if the service is not started when the client accesses it? What happens if you have stopped the service bundle?

Code repo:
http://code.google.com/p/osgi-world/source/browse/#svn/trunk/com.chandana.hello.HelloService
http://code.google.com/p/osgi-world/source/browse/#svn/trunk/com.chandana.hello.HelloClient

Reference: Introduction to OSGi (Java Modular) & Introduction to OSGi – 2 (OSGi Services) from our JCG partner Chandana Napagoda at the Chandana Napagoda blog.

“Java Sucks” revisited

Overview

An interesting document on Java’s shortcomings (from a C developer’s perspective) was written some time ago (about 2000?), but many of the issues raised are as true (or not) today as they were ten years ago. The original Java Sucks posting.

Review of shortcomings

“Java doesn’t have free().” The author lists this as a benefit, and 99% of the time it is a win. There are times when not having it is a downside, when you wish escape analysis would eliminate, recycle or immediately free an object you know isn’t needed any more (IMHO the JIT / javac should be able to work it out in theory).

“lexically scoped local functions” The closest Java has is anonymous inner classes. They are a poor cousin to closures (coming in Java 8), but they can be made to do the same thing.

“No macro system” Many of the useful tricks you can do with macros, Java can do for you dynamically. Not needing a macro system is an asset, because you don’t need to know when Java will give you the same optimisations. There is an application start-up cost that macros don’t have, and you can’t do the really obfuscated stuff, but this is probably a good thing.

“Explicitly inlined functions” The JIT can inline methods for you. Java can inline methods from shared libraries, even if they are updated dynamically. This does come at a run-time cost, but it’s nicer not to need to worry about this IMHO.

“I find lack of function pointers a huge pain” Function pointers make inlining methods more difficult for the compiler. If you are using object-oriented programming, I don’t believe you need these. For other situations, I believe closures in Java 8 are likely to be nicer.

“The fact that static methods aren’t really class methods is pretty dumb” I imagine most Java developers have come across this problem at some stage. IMHO the nicest solution is to move the “static” functionality to its own class and not use static methods if you want polymorphism.
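The static-method complaint is easy to demonstrate: static methods are resolved against the compile-time type, not the runtime object, so a subclass version hides rather than overrides. A minimal sketch (the class names are invented for illustration):

```java
// Static methods are bound at compile time to the declared type, so the
// subclass version is never selected polymorphically.
class Base {
    static String who() { return "Base"; }
    String whoVirtual() { return "Base"; }
}

class Sub extends Base {
    static String who() { return "Sub"; }   // hides, does not override
    @Override
    String whoVirtual() { return "Sub"; }   // overrides
}

public class StaticDispatchDemo {
    public static void main(String[] args) {
        Base b = new Sub();
        System.out.println(b.who());         // prints "Base" - resolved by the static type of b
        System.out.println(b.whoVirtual());  // prints "Sub"  - normal virtual dispatch
    }
}
```

This is why moving the "static" functionality into its own (non-static) class is the usual fix when polymorphic behaviour is wanted.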
“It’s far from obvious how one hints that a method should be inlined, or otherwise go real fast” Make it small and call it lots of times. ;)

“Two identical byte[] arrays aren’t equal and don’t hash the same” I agree that it’s a pretty ugly design choice not to make arrays proper objects. They inherit from Object, but don’t have useful implementations of toString, equals, hashCode or compareTo; clone() and getClass() are the most useful methods. You can use helper methods instead, but with many different helper classes called Array, Arrays, ArrayUtil and ArrayUtils in different packages, it’s all a mess for a new developer to deal with.

“Hashtable/HashMap doesn’t allow you to provide a hashing function” This is also a pain if you want to change the behaviour. IMHO the best solution is to write a wrapper class which implements equals/hashCode, but this adds overhead.

“iterate the characters in a String without implicitly involving half a dozen method calls per character” There is now String.toCharArray(), but this creates a copy you don’t need and is not eliminated by escape analysis. When it is, this will be the obvious solution. The same applies to “The other alternative is to convert the String to a byte[] first, and iterate the bytes, at the cost of creating lots of random garbage”.

“overhead added by Unicode support in those cases where I’m sure that there are no non-ASCII characters” Java 6 has a solution to this: -XX:+UseCompressedStrings. Unfortunately, Java 7 has dropped support for this feature. I have no idea why, as this option improved performance (as well as reducing memory usage) in tests I have done.

“Interfaces seem a huge, cheesy copout for avoiding multiple inheritance; they really seem like they were grafted on as an afterthought.” I prefer a contract which only lists the functionality offered, without adding implementation. The newer virtual extension methods in Java 8 will provide default implementations without state. In some cases this will be very useful.
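The byte[] complaint is easy to reproduce: the equals() and hashCode() that arrays inherit from Object are identity-based, so two arrays with identical contents compare unequal unless you go through the java.util.Arrays helper class:

```java
import java.util.Arrays;

public class ArrayEqualityDemo {
    public static void main(String[] args) {
        byte[] a = { 1, 2, 3 };
        byte[] b = { 1, 2, 3 };

        // Inherited Object.equals/hashCode compare identity, not contents.
        System.out.println(a.equals(b));                   // false
        System.out.println(a.hashCode() == b.hashCode());  // almost certainly false

        // The static helpers do what you usually wanted.
        System.out.println(Arrays.equals(a, b));           // true
        System.out.println(Arrays.hashCode(a) == Arrays.hashCode(b)); // true
        System.out.println(Arrays.toString(a));            // [1, 2, 3]
    }
}
```

This is also why a byte[] makes a poor HashMap key: lookups compare identity, so a structurally equal key built elsewhere will never be found.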
“There’s something kind of screwy going on with type promotion” The problem here is solved by covariant return types, which Java 5.0+ now supports.

“You can’t write a function which expects an Object and give it a short” Today you have auto-boxing. The author complains that Short and short are not the same thing. For efficiency purposes this can make surprisingly little difference in some cases with auto-boxing. In some cases it does make a big difference, and I don’t foresee Java optimising this transparently in the near future. :|

“it’s a total pain that one can’t iterate over the contents of an array without knowing intimate details about its contents” It’s rare that you really need to do this IMHO. You can use Array.getLength(array) and Array.get(array, n) to handle a generic array. It’s ugly, but you can do it. It’s one of the helper classes whose methods should really be on the array itself IMHO.

“The only way to handle overflow is to use BigInteger (and rewrite your code)” Languages like Scala support operators for BigInteger, and it has been suggested that Java should too. I believe overflow detection is also being considered for Java 8/9.

“I miss typedef” This would allow you to use primitives and still get type safety. IMHO the real issue is that the JIT cannot detect that a type is just a wrapper for a primitive (or two) and eliminate the need for the wrapper class. This would provide the benefits of typedef without changing the syntax, and make the code more object-oriented.

“I think the available idioms for simulating enum and :keywords are fairly lame” Java 5.0+ has enum, which are first-class objects and are surprisingly powerful.

“there’s no efficient way to implement `assert’” assert is now built in. Implementing it yourself is made efficient by the JIT (it probably wasn’t ten years ago).

“By having `new’ be the only possible interface to allocation, … there are a whole class of ancient, well-known optimizations that one just cannot perform.” This should be performed by the JIT IMHO.
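The overflow point is easy to see in plain Java: int arithmetic silently wraps, while BigInteger keeps the mathematically correct value, at the cost of object allocation and a rewritten expression:

```java
import java.math.BigInteger;

public class OverflowDemo {
    public static void main(String[] args) {
        // int arithmetic wraps around silently on overflow.
        int wrapped = Integer.MAX_VALUE + 1;
        System.out.println(wrapped);   // -2147483648

        // BigInteger requires rewriting the expression, but cannot overflow.
        BigInteger exact = BigInteger.valueOf(Integer.MAX_VALUE).add(BigInteger.ONE);
        System.out.println(exact);     // 2147483648
    }
}
```

The wrap-around is not flagged in any way at runtime, which is exactly why the original author wanted language-level overflow detection.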
Unfortunately, it rarely does, but this is improving.

“The finalization system is lame.” Most people agree it’s best avoided. Perhaps it could be more powerful and reliable. ARM (Automatic Resource Management) may be the answer.

“Relatedly, there are no ‘weak pointers.’” Java has always had weak, soft and phantom references, but I suspect this is not what is meant here. ??

“You can’t close over anything but final variables in an inner class!” This is true of anonymous inner classes, but not of nested inner classes referring to fields. Closures might not have this restriction, but that is likely to be just as confusing. Being used to the requirement for final variables, I don’t find this a problem, especially as my IDE will correct the code as required for me.

“The access model with respect to the mutability (or read-only-ness) of objects blows” The main complaint appears to be that there are ways of treating final fields as mutable. This is required for de-serialization and dependency injectors. As long as you realise that you have two possible behaviours, one lower level than the other, it is far more useful than it is a problem.

“The language also should impose the contract that literal constants are immutable.” Literal constants are immutable. It appears the author would like to expand what is considered a literal constant. It would be useful, IMHO, to support const in the way C++ does. const is a reserved keyword in Java, and the ability to define immutable versions of classes without creating multiple implementations or read-only wrappers would be more productive.

“The locking model is broken.” The memory overhead of locking is really an implementation detail. It’s up to the JVM to decide how large the header is and whether it can be locked. The other concern is that there is no control over who can obtain a lock. The common workaround for this is to encapsulate your lock, which is what you would have to do in any case. In theory the lock can be optimised away.
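The weak-reference point can be shown with java.lang.ref directly: a WeakReference returns its referent while it is strongly reachable, and the collector is free to clear it once the last strong reference is dropped. A minimal sketch — note that whether get() returns null after the gc() hint is up to the JVM, so the example prints that state rather than asserting it:

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);

        // While a strong reference exists, the referent is reachable.
        System.out.println(weak.get() != null);   // true

        strong = null;   // drop the only strong reference
        System.gc();     // a hint only; collection is not guaranteed
        System.out.println(weak.get() == null);   // usually true after GC, but not guaranteed
    }
}
```

SoftReference and PhantomReference follow the same API shape with different reachability semantics.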
Currently this only happens when the whole object is optimised away.

“There is no way to signal without throwing” For this I use a listener pattern with an onError method. There is no support in the language for this, but I don’t see the need for it.

“Doing foo.x should be defined to be equivalent to foo.x()” Perhaps foo.x => foo.getX() would be a better choice, rather like C# does. “Compilers should be trivially able to inline zero-argument accessor methods to be inline object+offset loads.” The JIT does this, rather than the compiler. This allows the calling code to be changed after the callee has been compiled.

“The notion of methods ‘belonging’ to classes is lame.” This is a “cool” feature which some languages support. In a more dynamic environment, this can look nicer. The downside is that you can have pieces of code for a class scattered all over the place, and you would have to have some way of managing duplicates in different libraries. E.g. library A defines a new printString() method and library B also defines a printString() method for the same class. You would need to make each library see its own copy, and have some way of determining which version library C wants when it calls this method.

Libraries

“It comes with hash tables, but not qsort” It comes with an “optimised merge sort”, which is designed to be faster.

“String has length+24 bytes of overhead over byte[]” That is without considering that each of the two objects is aligned to an 8-byte boundary (making it higher). If that sounds bad, consider that malloc can be 16-byte aligned with a minimum size of 32 bytes. If you use a shared_ptr to a byte[] (to give you similar resource management) it can be much larger in C++ than in Java. “The only reason for this overhead is so that String.substring() can return strings which share the same value array.” This is not correct. The problem is that Java doesn’t support variable-sized objects (apart from arrays).
This means that a String object is a fixed size, and to have a variable-sized field you have to have another object. It’s not great either way. ;)

“String.substring can be a source of ‘memory leak’” You have to know to take an explicit copy if you are going to retain a substring of a larger string. This is ugly; however, the benefits usually outweigh the downside. A better solution would be to be able to optimise the code so that a defensive copy was taken by default, except when the defensive copy is not needed (it is optimised away).

“The file manipulation primitives are inadequate” The file system information has been improved in Java 7. I don’t think these options are available, but they can be easily inferred if you need to know this.

“There is no robust way to ask ‘am I running on Windows’ or ‘am I running on Unix.’” There are the system properties os.name, os.arch and os.version, which have always been there.

“There is no way to access link() on Unix, which is the only reliable way to implement file locking.” This was added in Java 7: Creating a Hard Link.

“There is no way to do ftruncate(), except by copying and renaming the whole file.” You can use RandomAccessFile.setLength(), or FileChannel.truncate(), which was added in Java 1.4.

“Is ‘%10s %03d’ really too much to ask?” It was added in Java 5.0.

“A RandomAccessFile cannot be used as a FileInputStream or FileOutputStream” RandomAccessFile supports DataInput and DataOutput; FileInputStream and FileOutputStream can be wrapped in DataInputStream and DataOutputStream, so they can be made to support the same interfaces. I have never come across a situation where I would want to use both classes in a single method.

“markSupported is stupid” True. There are a number of stupid methods which are only there for historical reasons. Another is Object.wait(millis, nanos) on every object (even arrays), and yet the nanos is never really used.

“What in the world is the difference between System and Runtime?” I agree it appears arbitrary and in some cases doubled up.
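The “%10s %03d” wish has indeed been granted since Java 5.0, via java.util.Formatter, surfaced as String.format and PrintStream.printf:

```java
public class FormatDemo {
    public static void main(String[] args) {
        // printf-style formatting, available since Java 5.0:
        // %10s right-justifies in a 10-char field, %03d zero-pads to 3 digits.
        String s = String.format("%10s %03d", "abc", 7);
        System.out.println("[" + s + "]");   // [       abc 007]
    }
}
```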
System.gc() actually calls Runtime.getRuntime().gc(), and yet it is called “System GC” even in internal code. In hindsight they should really have been one class, with the monitoring functionality moved to JMX.

“What in the world is application-level crap like checkPrintJobAccess() doing in the base language class library?” So your SecurityManager can control whether you can perform printing (without having to have an application-level security manager as well). Not sure this really removes the need for application-level security. ;)

Reference: “Java Sucks” revisited from our JCG partner Peter Lawrey at the Vanilla Java blog....

Best Of The Week – 2012 – W04

Hello guys, Time for the “Best Of The Week” links for the week that just passed. Here are some links that drew Java Code Geeks attention:

* Java Anti-Patterns: A comprehensive list of Java programming anti-patterns. Also check out our Java Best Practices series while you are at it.

* Time Management: 6 Ways to Improve Your Productivity: A nice article providing suggestions on how to improve one’s productivity, including eliminating distractions, being prepared for bonus time, knowing when you are done with a task etc.

* Solving OutOfMemoryError (part 5) – JDK Tools: This article discusses the tools bundled with the JDK that can help us troubleshoot OutOfMemoryError problems in production machines. It provides examples for the jps, jmap and jhat command line tools. Also check out Monitoring OpenJDK from the CLI and Profile your applications with Java VisualVM.

* JSON Parsing in Android: A short tutorial on how to perform JSON parsing in Android using the native SDK. Also check out Android JSON Parsing with Gson Tutorial for a more efficient and robust way.

* 25 Best Free Eclipse Plug-ins for Java Developer to be Productive: A comprehensive list of the best free Eclipse plug-ins that can boost your productivity, including hits like FindBugs, Checkstyle, PMD, M2eclipse, Subclipse, EGit, Spring Tool Suite, JBossTools and others. Also check out Eclipse Shortcuts for Increased Productivity.

* Submitting Your Application to the Android Market: A full blown, step by step guide on how to submit an Android application to the Android Market. Check out our “Android Full Application Tutorial” series in order to find out how to build one.

* Coding for success: Amazing article discussing technical education and explaining why tomorrow’s children should be educated in writing code. Some lines from the article: “Learning to code is learning to use logic and reason”, “Code is simply the tool for automating the boring stuff”.
Awesome… * Automated Acceptance-Testing using Concordion: This article discusses Concordion and the interesting approach it takes on automated acceptance testing. A simple example of how to use it is also provided. Check out 7 mistakes of software testing on the overall subject of testing. * Load Balancing With Apache Tomcat: Simple and straightforward tutorial on how to implement load balancing with Tomcat using Apache web server and mod_jk. Also see Multiple Tomcat Instances on Single Machine. * Java development 2.0: Securing Java application data for cloud computing: This tutorial shows how to use private-key encryption and the Advanced Encryption Standard to secure sensitive application data for the cloud. Additionally, an encryption strategy is provided, which is important for maximizing the efficiency of conditional searches on distributed cloud datastores. Also check out Developing and Testing in the Cloud. That’s all for this week. Stay tuned for more, here at Java Code Geeks. Cheers, Ilias Tsagklis...

Blind spot of software development methodologies

There is a trend of rise and fall of different software development methodologies. There is also a lot of discussion and excitement about which is better, Agile or Waterfall or whatever, and what Scrum really is. My impression is that there is a trend of accepting processes and practices with the expectation that there will always be better results and fewer problems, which is neither necessary nor feasible. Although I can see that some methodologies can have a certain advantage over another when applied to a concrete software project + team + company, there is something missing. There are parts of software development which can also affect the success of a project, a team, or a company, but are not a methodology matter! I would like to think aloud about these simple things which are somehow underestimated, and yet are still very important:

Plain competence. You cannot have enough of this! Is it possible that you are oversteering your projects because your team is not competent enough? Just think about this: when was the last time anybody from your team picked up a technical book related to your project? Having a competent team will result in team members going for it, instead of looking for excuses.

Common sense team workflow. Does it make sense that the whole team attends a meeting where most of the time a couple of people have a discussion about how to implement something? Saying it is a Scrum thing will not make it better; it is still a waste of time. I’m not saying that meetings are always bad; my point is that you should think about whether it works for your team. My suggestion is to let the team decide on the workflow as much as possible, and have them included. Also, having a process of “their own” can have benefits for team morale.

Every team is unique. My experience is that putting a group of people together as a team will always produce results and processes which are unique to this team.
If you force some sort of process onto them, sometimes you will get partial results, because the team tends to work exactly the same as before, with the additional overhead of being “compatible” with the given process. Even if there is a benefit, there is inertia against accepting something “just because”. The team should have the freedom to measure and adopt practices which are working for them, and reject the ones which don’t. As a conclusion I would ask: what other things in the software development process do you think are important? What experiences from other teams can be applied to your team, and what certainly cannot, because you are too different? Reference: Blind spot of software development methodologies from our JCG partner Nenad Sabo at the Software thoughts blog....

Storing hierarchical data in MongoDB

Continuing the NoSQL journey with MongoDB, I would like to touch on one specific use case which comes up very often: storing hierarchical document relations. MongoDB is an awesome document data store, but what if documents have parent-child relationships? Can we effectively store and query such document hierarchies? The answer, for sure, is yes, we can. MongoDB has several recommendations on how to store trees. One solution described there, and quite widely used, is the materialized path. Let me explain how it works with very simple examples. As in previous posts, we will build a Spring application using the recently released version 1.0 of the Spring Data MongoDB project. Our POM file contains very basic dependencies, nothing more:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>mongodb</groupId>
    <artifactId>com.example.spring</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring.version>3.0.7.RELEASE</spring.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-mongodb</artifactId>
            <version>1.0.0.RELEASE</version>
            <exclusions>
                <exclusion>
                    <groupId>org.springframework</groupId>
                    <artifactId>spring-beans</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.springframework</groupId>
                    <artifactId>spring-expression</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>cglib</groupId>
            <artifactId>cglib-nodep</artifactId>
            <version>2.2</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.16</version>
        </dependency>
        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>mongo-java-driver</artifactId>
            <version>2.7.2</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context-support</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

To properly configure the Spring context, I will use the configuration approach utilizing Java classes. I advocate this style more and more, as it provides strongly typed configuration and most mistakes can be caught at compilation time; there is no need to inspect your XML files anymore.
Here is how it looks:

package com.example.mongodb.hierarchical;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoFactoryBean;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoDbFactory;

@Configuration
public class AppConfig {
    @Bean
    public MongoFactoryBean mongo() {
        final MongoFactoryBean factory = new MongoFactoryBean();
        factory.setHost( "localhost" );
        return factory;
    }

    @Bean
    public SimpleMongoDbFactory mongoDbFactory() throws Exception {
        return new SimpleMongoDbFactory( mongo().getObject(), "hierarchical" );
    }

    @Bean
    public MongoTemplate mongoTemplate() throws Exception {
        return new MongoTemplate( mongoDbFactory() );
    }

    @Bean
    public IDocumentHierarchyService documentHierarchyService() throws Exception {
        return new DocumentHierarchyService( mongoTemplate() );
    }
}

That’s pretty nice and clear. Thanks, Spring guys! Now all the boilerplate stuff is ready. Let’s move to the interesting part: documents. Our database will contain a ‘documents’ collection which stores documents of type SimpleDocument. We describe this using Spring Data MongoDB annotations on the SimpleDocument POJO.
package com.example.mongodb.hierarchical;

import java.util.Collection;
import java.util.HashSet;

import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Transient;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.Field;

@Document( collection = "documents" )
public class SimpleDocument {
    public static final String PATH_SEPARATOR = ".";

    @Id private String id;
    @Field private String name;
    @Field private String path;

    // We won't store this collection as part of the document but will build it on demand
    @Transient private Collection< SimpleDocument > documents = new HashSet< SimpleDocument >();

    public SimpleDocument() {
    }

    public SimpleDocument( final String id, final String name ) {
        this.id = id;
        this.name = name;
        this.path = id;
    }

    public SimpleDocument( final String id, final String name, final SimpleDocument parent ) {
        this( id, name );
        this.path = parent.getPath() + PATH_SEPARATOR + id;
    }

    public String getId() { return id; }
    public void setId( String id ) { this.id = id; }

    public String getName() { return name; }
    public void setName( String name ) { this.name = name; }

    public String getPath() { return path; }
    public void setPath( String path ) { this.path = path; }

    public Collection< SimpleDocument > getDocuments() { return documents; }
}

Let me explain a few things here. First, the magic property path: this is the key to constructing and querying our hierarchy. The path contains the identifiers of all the document’s parents, usually divided by some kind of separator, in our case just . (dot). Storing document hierarchical relationships in this way allows us to quickly build the hierarchy, search and navigate. Second, notice the transient documents collection: this non-persistent collection is constructed by the persistence provider and contains all descendant documents (which, in turn, also contain their own descendants).
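The reason this path format works is that "all descendants of document 1" reduces to a simple prefix test, which MongoDB can express as an anchored regex on the (indexable) path field. Here is a plain-Java sketch of just the matching rule, with no MongoDB involved; the `isDescendant` helper is invented for illustration:

```java
import java.util.regex.Pattern;

public class MaterializedPathDemo {
    // A document is a descendant of rootId when its path starts with
    // "rootId." -- the same rule the service below expresses as the
    // anchored regex ^rootId[.] ([.] escapes the dot, which is a regex
    // metacharacter).
    static boolean isDescendant(String path, String rootId) {
        return Pattern.compile("^" + Pattern.quote(rootId) + "[.]")
                      .matcher(path)
                      .find();
    }

    public static void main(String[] args) {
        System.out.println(isDescendant("1.2", "1"));    // true  (direct child)
        System.out.println(isDescendant("1.2.5", "1"));  // true  (grandchild)
        System.out.println(isDescendant("1", "1"));      // false (the root itself)
        System.out.println(isDescendant("11.2", "1"));   // false (prefix "11", not "1")
    }
}
```

The trailing separator in the pattern is what prevents id "1" from accidentally matching descendants of "11", which is why the escaped dot matters.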
Let us see it in action by looking into the find method implementation:

package com.example.mongodb.hierarchical;

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

public class DocumentHierarchyService implements IDocumentHierarchyService {
    private MongoOperations template;

    public DocumentHierarchyService( final MongoOperations template ) {
        this.template = template;
    }

    @Override
    public SimpleDocument find( final String id ) {
        final SimpleDocument document = template.findOne(
            Query.query( new Criteria( "id" ).is( id ) ), SimpleDocument.class );

        if( document == null ) {
            return document;
        }

        return build( document,
            template.find(
                Query.query( new Criteria( "path" ).regex( "^" + id + "[.]" ) ),
                SimpleDocument.class ) );
    }

    private SimpleDocument build( final SimpleDocument root, final Collection< SimpleDocument > documents ) {
        final Map< String, SimpleDocument > map = new HashMap< String, SimpleDocument >();

        for( final SimpleDocument document: documents ) {
            map.put( document.getPath(), document );
        }

        for( final SimpleDocument document: documents ) {
            final String path = document
                .getPath()
                .substring( 0, document.getPath().lastIndexOf( SimpleDocument.PATH_SEPARATOR ) );

            if( path.equals( root.getPath() ) ) {
                root.getDocuments().add( document );
            } else {
                final SimpleDocument parent = map.get( path );
                if( parent != null ) {
                    parent.getDocuments().add( document );
                }
            }
        }

        return root;
    }
}

As you can see, to get a single document with its whole hierarchy we need to run just two queries (though a more optimal algorithm could reduce it to a single query).
Here is a sample hierarchy and the result of reading the root document from MongoDB:

template.dropCollection( SimpleDocument.class );

final SimpleDocument parent   = new SimpleDocument( "1", "Parent 1" );
final SimpleDocument child1   = new SimpleDocument( "2", "Child 1.1", parent );
final SimpleDocument child11  = new SimpleDocument( "3", "Child 1.1.1", child1 );
final SimpleDocument child12  = new SimpleDocument( "4", "Child 1.1.2", child1 );
final SimpleDocument child121 = new SimpleDocument( "5", "Child", child12 );
final SimpleDocument child13  = new SimpleDocument( "6", "Child 1.1.3", child1 );
final SimpleDocument child2   = new SimpleDocument( "7", "Child 1.2", parent );

template.insertAll( Arrays.asList( parent, child1, child11, child12, child121, child13, child2 ) );

...

final ApplicationContext context = new AnnotationConfigApplicationContext( AppConfig.class );
final IDocumentHierarchyService service = context.getBean( IDocumentHierarchyService.class );

final SimpleDocument document = service.find( "1" );
// Printing the document shows the following hierarchy:
//
// Parent 1
// |-- Child 1.1
//     |-- Child 1.1.1
//     |-- Child 1.1.3
//     |-- Child 1.1.2
//         |-- Child
// |-- Child 1.2

That’s it. A simple but powerful concept. Sure, adding an index on the path property will speed up queries significantly. There are plenty of possible improvements and optimizations, but the basic idea should be clear by now. Reference: Storing hierarchical data in MongoDB from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog....

The Rise of the Front End Developers

In any web development company, there exist two different worlds; well, there are more, but we’ll just focus on two – front end (designers) and back end (developers). The front end guys are responsible for making something that is visible to the end users (THE LOOK). The back end guys are responsible for making the front end work (THE FUNCTIONALITY). Together, they deliver a complete web application/site. The back end developers would typically use programming languages such as Java/C++/Python. Apart from talking to the database and processing requests, they even have an arsenal of libraries to generate the site markup (JSPs, server side templates, etc). The front end guys usually fill in by writing HTML documents and CSS files (merely as writers) to present this markup in a visually pleasing way, and the back end just takes these templates to populate data. The front end had only one option for doing any logical operations: using JavaScript – which for a long time was used just to validate forms (and do some freaky stuff). Because of this cultural difference, there has always been an ego-war between these two worlds. Even company management would rate the front end guys a par below the back end developers, because the front end guys don’t do any serious programming. All was going fine until the web 2.0 era. Then the front end realized that they could use JavaScript to do much cooler stuff than just form validation. The development of high speed JavaScript engines (such as V8) made it possible to run complex JavaScript code right in the browser. With the introduction of technologies such as WebGL and Canvas, even graphics rendering became feasible using JavaScript. But this didn’t change anything on the server side; the server programs were still running on JVMs/Rubys/Pythons. Fast forward to today: the scenario is dramatically changing. JavaScript has just sneaked its way into the servers.
Now, it is no longer required that a web application have a back end programming language such as Java/C++. Everything can be done using just JavaScript, thanks to node.js, which made it possible to run JavaScript on the server side. Using MongoDB, one can replace SQL code and store JSON documents via JavaScript MongoDB connectors. JavaScript template libraries such as {{Mustache}}/Underscore have almost removed the need for server side templates (JSPs). On the client side, JavaScript MVC frameworks such as Backbone.js enable us to write maintainable code. And there’s always the plain old JavaScript waiting for us to write some form validation script. With that, it is now possible to do the heavy lifting just by using JavaScript. The front end JavaScript programmers no longer need to focus on just the front end. They can use their skill set to develop web applications end-to-end. This rise of the front end developers poses a real threat to the survival of back end developers. If you are one of those back end guys, do you already realize this threat? What’s your game plan to stay fit and survive this challenge? Reference: The Rise of the Front End Developers from our JCG partner Veera Sundar at the Veera Sundar blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.