NetBeans Usability Tips

Java IDEs have come a long way since the days of JBuilder (though JBuilder seemed like a welcome advance at the time). Today's Java IDEs (such as NetBeans, Eclipse, IntelliJ IDEA, and JDeveloper) are very advanced tools that most Java developers embrace for writing significant Java code. As advanced as these IDEs are, they all still have their own quirks, and each seems better and easier to use when one understands some key tips (or "tricks") for using that IDE more efficiently. In this post, I look at some tips I have found useful when using NetBeans.

Disabling Automatic Scanning

A problem that can be especially onerous when using NetBeans on a large code base with many related projects open is the occasionally too-frequent automatic scanning that NetBeans performs. This is supposed to occur only intermittently and its intention is good, but sometimes the value of the intended behavior does not justify its performance-degrading cost. Fortunately, this option can be disabled when its cost is greater than its benefit. In the NetBeans for PHP blog post Enable auto-scanning of sources – Scan for External Changes, Petr Pisl covers how to do this in NetBeans 6.9. This feature is also supported in NetBeans 7.1 as shown in the following screen snapshot (the window shown is accessible by selecting Tools > Options > Miscellaneous > Files).

Controlling the Level of NetBeans Hints

NetBeans's Java hints can aid the Java developer in improving and modernizing his or her Java code. The hints cover topics as diverse as performance, safety, conciseness, coding standards, likely bugs, latest JDK standards, and best practices. I do not cover these useful hints in more detail here because I've already covered them in multiple previous posts. I introduced NetBeans hints, how to enable them, and how to configure them as warnings or errors, and introduced seven of the most important hints in my blog post Seven Indispensable NetBeans Java Hints. In the blog post Seven NetBeans Hints for Modernizing Java Code, I discussed seven more hints that are useful for bridging legacy Java code forward to use the best features of newer SDKs (J2SE 5, Java SE 6, and Java SE 7). My post Creating a NetBeans 7.1 Custom Hint demonstrates writing custom hints to further expand NetBeans hinting capability beyond the out-of-the-box hints.

Setting Source/Target JDK Appropriately

In the blog post Specifying Appropriate NetBeans JDK Source Release, I looked at several advantages of setting the source/target JDK level of NetBeans projects appropriately. This can make a major difference for developers using JDK 7, as it helps the hints covered in the previous tip to show areas where pre-JDK 7 code can be migrated to JDK 7 constructs. However, even developers using JDK 6 or JDK 5 can find value in having this set appropriately. The appropriate setting not only advertises the features that are available, but it also prevents developers from mistakenly using newer constructs that are not yet available in the JDK version the code should actually build against. NetBeans will warn the developer that certain features are not available for that JDK setting, so it is important to have it set properly.

Leveraging NetBeans Keyboard Commands

Whether it's vi, emacs, Eclipse, NetBeans, or any other editor, the masters of the respective editors know and frequently use keyboard commands to get work done quickly. NetBeans offers so many keyboard-based commands that it's difficult to summarize them.
However, some good starting points include Highlights of NetBeans IDE 7.0 Keyboard Shortcuts and Code Templates, NetBeans Tips and Tricks, Keyboard Shortcuts I Use All the Time, NetBeans IDE Keyboard Shortcuts, and NetBeans Shortcut Keys. NetBeans even supports Eclipse key bindings!

Hiding Clutter and Noise with Code Folding

My preference is to have code that is as clean as possible. Sometimes, however, I am forced to deal with code that has a lot of unimportant junk or noise in it. In such cases, NetBeans's code folding support is welcome because I can hide that noise. It would obviously be better if I could remove the unnecessary noise, and code folding can be abused, but I appreciate the feature when it's my only option for reducing clutter and noise so that I can focus on what matters. I discussed NetBeans code folding in further detail in the post NetBeans Code Folding and the Case for Code Folding.

Other NetBeans Tips

There are numerous other useful NetBeans tips available online.

Roman Strobl's NetBeans Quick Tips

In the blog he maintained while working at Sun Microsystems, Roman Strobl wrote several "NetBeans Quick Tip" posts (although dated – they are from the mid-2000s – several of these are still applicable):

NetBeans Quick Tip #1 – Setting Target JDK
NetBeans Quick Tip #2 – Generating Getters and Setters
NetBeans Quick Tip #3 – Increasing Font Size
NetBeans Quick Tip #4 – Extending the Build Process
NetBeans Quick Tip #5 – EOL Sweeper
NetBeans Quick Tip #6 – Abbreviations in Editor
NetBeans Quick Tip #7 – Macros in Editor
NetBeans Quick Tip #8 – Using Custom Folds
NetBeans Quick Tip #9 – Better Responsiveness of Error Marks and Hints
NetBeans Quick Tip #10 – Diffing Two Files
NetBeans Quick Tip #11 – How to Save As…
NetBeans Quick Tip #12 – Fast Navigation to Methods and Fields
NetBeans Quick Tip #13 – Define a Shortcut for Ant Target
NetBeans Quick Tip #14 – Accessing Files Outside Projects
NetBeans Quick Tip #15 – Adding Multiple Components with Matisse
NetBeans Quick Tip #16 – Using Dependent Projects
NetBeans Quick Tip #17 – Faster Building of Projects with Dependencies
NetBeans Quick Tip #18 – What to Do when Things Go Wrong?
NetBeans Quick Tip #19 – Positioning without Guidelines in Matisse
NetBeans Quick Tip #20 – Killing Processes
NetBeans Quick Tip #21 – Achieving Same Size
NetBeans Quick Tip #22 – Using Matisse's Connection Manager
NetBeans Quick Tip #23 – Changing Code in Blue Guarded Blocks
NetBeans Quick Tip #24 – Correct Javadoc
NetBeans Quick Tip #25 – Case Insensitive Code Completion
NetBeans Quick Tip #26 – Short Package Names
NetBeans Quick Tip #27 – Implementing Abstract Methods
NetBeans Quick Tip #28 – Configuring Derby Database in NetBeans 5.0
NetBeans Quick Tip #29 – Monitoring HTTP Communication
NetBeans Quick Tip #30 – When GroupLayout Fails
NetBeans Quick Tip #31 – Changing the Look and Feel
NetBeans Quick Tip #32 – Faster and More Stable Ruby Support
NetBeans Quick Tip #33 – Show Error Using Keyboard
Keyboard Shortcuts I Use All the Time

Other Posts on NetBeans Tips

NetBeans Community Docs – Tips And Tricks
Dumb Coder NetBeans Tips
Gephi NetBeans Tips
Tips and Tricks of NetBeans
NetBeans Quick Tip: How to use tabs not spaces
NetBeans Tips and Tricks
Can I Diff Two Files Outside Version Control?

Your Favorite NetBeans Tip or Trick?

What is your favorite NetBeans tip or trick?

Reference: NetBeans Usability Tips from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Test Driven Development – A Win-Win strategy

Agile practitioners talk about Test Driven Development (TDD), and so do a lot of developers who care about the quality and workability of their code. And I, once upon a time, not so long ago, managed to read about TDD. The crux of TDD as I have understood it is:

Write a test, and see it fail.
Write the code to make the test succeed.
Automate the tests.
Refactor the code to improve its quality.
Repeat.

Pretty easy to understand. An annoyed developer shouts: "A developer writing tests? How can you expect us to develop and test and yet finish the feature in time?". After all, developers don't want to do the boring testing work. I have been a developer for around 2 years now, and there were times when I reacted that way, during those initial days. But with time, I have started to understand the crux of software development. And this time around, I thought of trying out TDD.

My work involves wiring up the data in the db with the UI using a Java EE web framework – typical web application work. Let me explain my testing strategy before I adopted TDD:

Write the complete code, which includes PL/SQL procedures, Java code to invoke the PL/SQL procedures, Java code for the UI bindings, and the JSP page itself.
Manually test the functioning of the db layer and the UI layer code. This involves navigating to the page and then testing various operations. In this case both the UI issues and the backend code issues would crop up.
As I would play around with the UI further, I would unearth a few bugs in the code; otherwise, write a Selenium test to automate testing of a few use-cases.

With the above 3 steps, I spent a lot of time:

waiting for the backend code to compile and the server to restart for the UI to reflect the changes. Even for a simple one-word/one-statement change I had to wait approximately up to 5 minutes, and in some cases 8 minutes. While I would wait for this restart, I would lose focus to some other task and thereby take some time to come back to the main task.
trying to debug and find out whether an exception/bug was due to a UI code issue or a backend code issue.
waiting for the pages to load and navigating through the pages to the right page.

OK, those were the pre-historic times. Now coming to the Modern Age. I thought TDD would not have been possible in the kind of work I do, but that was because I wrote badly coupled backend and UI code. I couldn't think of ways to test my backend code independently, then move to the UI code and test it via Selenium tests. Keeping aside these notions, I gave it a shot. I know I wasn't very close to the actual TDD, but I felt somewhat closer. My approach:

I had a fair idea of how to implement the logic, so I created a basic implementation and let it compile successfully.
Created a few data population tests to get the kind of data to be used for the testing.
Created JUnits to test the basic functionality, mostly in terms of the correct execution of the PL/SQL procedures via the Java API.
Updated the JUnits to add more tests for the actual functionality required, and updated the code to implement that functionality.
Refactored the code to remove bad smells and then ran the JUnits to see that nothing was broken.

The reasons why I felt excited, why I felt it was a win-win strategy:

I began to think in terms of the user of the API more than its creator. This kept me away from adding hacks which could fix the issue but would be difficult to test. This tremendously improved the code structure compared to what I had written before.
No server restarts, no wasting of ~8 minutes per restart, no wasted time navigating to the pages. I just had to edit the code, run the JUnits and let the tests decide the fate. This is most useful for the backend code I have written.
No loss of focus, as I am deeply involved in code-test cycles.
A sense of achievement as I see the tests show up green bars.
The possibility of creating code with good unit tests for the backend features, which also helps in refactoring the code more easily.

Now I just have to write the glue code for the UI and the backend and test the glue code via the Selenium tests. Anyone had any similar experiences when they started to use TDD?

Reference: My First steps in Test Driven Development – A Win-Win strategy from our JCG partner Mohamed Sanaulla at the Experiences Unlimited blog....
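To make the red-green-refactor loop described above concrete, here is a minimal, hypothetical JUnit 4 sketch. The OrderCalculator class, its methods and the prices are invented purely for illustration and are not from the original article:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrderCalculatorTest {

    // Step 1: write the test first and watch it fail (red).
    @Test
    public void totalPriceIsSumOfItemPrices() {
        OrderCalculator calculator = new OrderCalculator();
        calculator.addItem("dvd", 10.0);
        calculator.addItem("cd", 5.0);
        assertEquals(15.0, calculator.totalPrice(), 0.001);
    }
}

// Step 2: write just enough production code to make the test pass (green),
// then refactor while keeping the bar green, and repeat for the next behaviour.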

Android Emulator: Scale size without using Eclipse

As you might have read in my previous post, I'm currently experimenting with mobile web solutions. In such a case, having an Android emulator is quite convenient for quickly testing your solution. Personally though, I hate the default emulator skin, mainly because it takes up quite a large portion of your screen. Hence, I normally use a custom emulator skin. But then there is the problem of scaling the size properly, as the default settings will most probably run off your screen.

A couple of years ago I wrote a blog post on how to tune your Android emulator with a custom skin. In that post, I also described how to scale it properly, by adding the -scale option to the command line arguments from within the Eclipse Run Configuration dialog. However, if you're only interested in the emulator itself, you may start it by launching the AVD manager directly. In that case, the scale parameter can be set directly when launching your AVD.

By the way, should you have problems connecting to the Internet from your emulator, it might be due to the proxy settings of your (company) network. Have a look at this post then, it might help you.

Reference: Scaling Android Emulator Size: Without using Eclipse from our JCG partner Juri Strumpflohner at the Juri Strumpflohner's TechBlog blog....

Complete Guide To Deploy Java Web Application in Amazon Ec2 using Eclipse

Hi readers, today I'm going to show you how to deploy a simple Java web application to Amazon EC2 using the Eclipse IDE. Before we begin we need a few things:

Eclipse Java EE IDE – you can download it at http://www.eclipse.org/downloads/ (I'm using the Indigo version)
An Amazon EC2 account – http://aws.amazon.com/ec2/ (a free account is enough)
Some basic understanding of Java web applications

OK, let's start, here we go….

Step 1
First you have to install the AWS Toolkit for Eclipse plugin. Simply go to Help –> Eclipse Marketplace –> search for Amazon. You will get the AWS Toolkit for Eclipse. Click Install. It will show the corresponding packages and, after you agree to the license, will install them. Just a simple procedure.

Step 2
Windows –> Preferences –> select AWS Toolkit and fill in the fields according to your Amazon EC2 account. Give your name as the Account Name (not necessary), but give exact values for Access Key ID and Secret Access Key according to your Amazon account. In the optional configuration (expand it), give the account id (exact value) in that field.

Step 3
Windows –> Show View –> Other –> select the AWS Toolkit views (select all views) –> OK. Now you can see the AWS perspective views.

Step 4
Now the configuration part is over. Let's create an Amazon EC2 server. To do that: File –> New –> Other –> Server –> Server –> select the Amazon EC2 or Amazon Elastic server with Tomcat version 6 (you can also choose Tomcat version 7). Fill in the fields and Finish. (I hope you have some basic understanding of Eclipse.)

Step 5
Now create a Dynamic Web Project: File –> New –> Other –> Web –> Dynamic Web Project. Give it whatever name you like. Choose the AWS EC2 or Elastic runtime as the target runtime. Click Finish.

Step 6
Now create a servlet and Finish. Change the doGet() in the servlet (only for demo purposes – this is where your code goes).

Step 7
Right click the servlet –> Run As –> Run on Server –> select the server you created above and Finish.

Step 8
Eclipse will automatically open the web page that you created. (Or, in the Servers tab, right click the server and select Amazon Web Services –> running.)

That's it… just a simple procedure, but you can expand on it in your own way… it's up to you.

Reference: Complete Guide To Deploy Java Web Application in Amazon Ec2 using Eclipse from our JCG partner Rajith Delantha at the Looping around with Rajith… blog....
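For Step 6 above, the article leaves the servlet body to the reader. Here is a minimal, hypothetical doGet() you could drop into the generated servlet just to verify the deployment; the class name HelloEc2Servlet and the greeting text are my own placeholders, not part of the original guide:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloEc2Servlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Write a trivial HTML page so we can see that the EC2 deployment works.
        resp.setContentType("text/html");
        resp.getWriter().println("<h1>Hello from Amazon EC2!</h1>");
    }
}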

Code comments gone wrong

Adding code comments is supposed to be good practice, but here is why it often fails:

Code is the single authoritative source of truth in a program! There is no way to ensure that code comments are correct at all times (they are not always updated as the code changes).
Comments are written in human language, which can be prone to misinterpretation.

First put your good intentions into writing simple and readable code.

Write self-descriptive code! Your code should read like sentences. Avoid smart shortcuts and tricks because they break the reading. Expect the reader to have solid programming knowledge but no knowledge about the purpose of your code. If code is too compact, add extra code steps to document it, for example:

...
final Person dummyPerson = new Person("Joe", "Bloggs");
return dummyPerson;

instead of using a comment:

...
// Return dummy person.
return new Person("Joe", "Bloggs");

OK, OK, this example was a bit silly, but you get the idea.

Using long names is considered bad practice; I disagree. Prefer long, explicit names over short, meaningless names which require code comments. Sometimes long names are really annoying, for example when they keep appearing everywhere in some algorithm; in that case you could use a comment.

Consider using more columns: the default maximum of 80 columns is terrible. Use a wide screen and 120 columns or more; the code will be more readable because long lines will not wrap anymore, and you can use longer, more explicit names.

Use assertions to document pre- and post-conditions instead of lengthy comments:

public List<String> listFiles(final String folderUrl) {
  assert folderUrl != null;
  assert folderUrl.endsWith("/");
  ...
}

If you write an API, good documentation is necessary, but for internal code I think comments should not replace good naming and code clarity. I use code comments when the code is not really self-documenting. Comments should convey what code cannot. They should explain the reasons for a specific design decision; they should explain what the code is supposed to achieve and why.

Learn how to use Javadoc: it not only looks better, it can also help automatically update some documentation. When referring to code, try using the link tag. Your IDE may automatically update the linked method and class names during renaming, which ensures that some of your documentation stays up to date.

/**
 * Use the link tag: {@link SomeClass#someMethod}
 */

Links:
http://www.oracle.com/technetwork/java/javase/documentation/index-137868.html
http://stackoverflow.com/questions/398546/technical-tips-for-writing-great-javadoc
http://www.javaspecialists.eu/archive/Issue039.html

Reference: Code comments gone wrong from our JCG partner Christophe Roussy at the Javarizon blog....
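As a small addition to the naming point above, here is a hedged, invented example (the variable names are mine, not the original author's) of how a descriptive name can make a comment unnecessary:

// Short name that only makes sense with a comment:
int d; // elapsed time in days

// Self-descriptive name, no comment required:
int elapsedTimeInDays;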

Top 7 Programmers bad habits

1. The "all code is crap, except mine" attitude.

I have bad news for you, buddy: all code is crap. No matter how much effort you put into it, there is always a majority of programmers who are going to think that your code sucks and that they could have done it 10 times better. I have already covered this topic in previous posts; you can find more information on what exactly I mean when I say that all code is crap here and here.
How to fix it: Don't criticise other people's code – it could be yours in the spotlight. Make objective and professional observations instead, but don't judge. Be humble and try to learn from everyone around you; hopefully then your code won't be so bad.

2. The "I'll fix that in a second" catastrophe.

Taking shortcuts is very tempting; everyone has done it. There are actually situations where they are necessary, but overall they are dangerous, very dangerous, and should be avoided. A shortcut which goes wrong may save you a few hours, but may cause months of pain.
How to fix it: Don't trust yourself when carrying out delicate activities. Ask someone else to review what you are doing. Make sure that if you are about to take a shortcut, you make the reasons and the risks very clear to the stakeholders. Try to get a manager to sign off every time you are about to take a shortcut.

3. The "that will only take a second" misconception.

Barcelona being my hometown, I am very proud of the Sagrada Familia cathedral, which is very well known for its beauty, and also for the time it is estimated it will take to complete (under construction since 1882). But that's probably because they didn't ask a programmer to estimate, otherwise the estimate would probably have been somewhere around 2 weeks.
How to fix it: For starters, it is important to understand that accurate estimations in software development for non-trivial solutions are impossible; we can only guess. Also remember that it is very likely you will find so many things which you didn't foresee when you started developing that it is worth multiplying the estimate to cover for those – I usually go with 1.5 or 2.

4. The ego spiral.

Many programmers' discussions look more like rooster fights than human discussions. This usually happens in design and architectural meetings. It is actually quite easy to detect these ego spirals: you just have to substitute most of what the contenders are saying with COC! COC! COCOCOOCCC! COOC!
How to fix it: Leave your ego at home. Big egos are one of the biggest non-technical issues for any programmer. Keep in mind some basic considerations when making decisions.

5. "It wasn't me!"

In my opinion, another bad habit of most programmers is the lack of accountability. We always have an excuse… It's as if we were saying that under normal conditions we would never make a mistake, which honestly is quite hard to believe.
How to fix it: No need to cry, or to perform seppuku (aka harakiri), when we make a mistake. Having a healthy attitude where you can just say something like "yeah, sorry, my fault – now we need to do this to fix the issue" is a very professional attitude, and it will help you to build a reputation and to be better regarded by your colleagues.

6. The demotivated genius.

Repetitive and simple tasks are not the best motivators, but they need to be done; programmers tend to get demotivated and very unproductive when they need to complete them.
How to fix it: Discipline. Unfortunately, there isn't any other remedy I can think of.

7. The premature programmer.

If programming were sex, there would be a lot of unsatisfied computers. You can't just go in, do things halfway through and then fall asleep. One of the concepts that I find most programmers struggling with is the concept of "done". Remember that done means: tested (and not only unit tested), documented, committed, merged…
How to fix it: This one is tricky; the complexity of the non-apparent tasks necessary to fully complete some functionality is quite high and usually requires discipline and training. Probably the two easiest ways to help a programmer understand whether some code is done are peer reviews and demos.

Reference: Top 7 Programmers bad habits from our JCG partner Rajith Delantha at the Looping around with Rajith… blog....

Play 2 – modules, plugins, what’s the difference?

There seems to be some confusion regarding Play 2 modules and plugins. I imagine this is because the two terms are often used synonymously. In Play (both versions – 1 and 2) there are distinct differences between them. In this post, I'm going to look at what a plugin is, how to implement one in Java and Scala, and how to import plugins from modules.

Plugins

A Play 2 plugin is a class that extends the Java class play.Plugin or has the Scala trait play.api.Plugin. This class may be something you have written in your own application, or it may be a plugin from a module.

Writing a plugin in Java

Create a new class, and have it extend play.Plugin. There are three methods available to override – onStart(), onStop() and enabled(). You can also add a constructor that takes a play.Application argument. To have some functionality occur when the application starts, override onStart(). To have functionality occur when the application stops, override onStop(). It's that simple! Here's an example implementation which doesn't override enabled():

package be.objectify.example;

import play.Application;
import play.Configuration;
import play.Logger;
import play.Plugin;

/**
 * An example Play 2 plugin written in Java.
 */
public class MyExamplePlugin extends Plugin {

    private final Application application;

    public MyExamplePlugin(Application application) {
        this.application = application;
    }

    @Override
    public void onStart() {
        Configuration configuration = application.configuration();
        // you can now access the application.conf settings, including any custom ones you have added
        Logger.info("MyExamplePlugin has started");
    }

    @Override
    public void onStop() {
        // you may want to tidy up resources here
        Logger.info("MyExamplePlugin has stopped");
    }
}

Writing a plugin in Scala

Create a new Scala class, and have it extend play.api.Plugin. Just as in the Java version, there are onStart(), onStop() and enabled() methods along with a play.api.Application constructor argument. Here's the Scala implementation:

package be.objectify.example

import play.api.{Logger, Application, Plugin}

/**
 * An example Play 2 plugin written in Scala.
 */
class MyExamplePlugin(application: Application) extends Plugin {

  override def onStart() {
    val configuration = application.configuration
    // you can now access the application.conf settings, including any custom ones you have added
    Logger.info("MyExamplePlugin has started")
  }

  override def onStop() {
    // you may want to tidy up resources here
    Logger.info("MyExamplePlugin has stopped")
  }
}

Hooking a plugin into your application

Regardless of the implementation language, plugins are invoked directly by Play once you have added them to the conf/play.plugins file. This file isn't created when you start a new application, so you need to add it yourself. The syntax is <priority>:<classname>. For example, to add the example plugin to your project, you would use

10000:be.objectify.example.MyExamplePlugin

The class name is that of your plugin. The priority determines the order in which plugins start up, and just needs to be a number that is larger or smaller than that of another plugin. If you have several plugins, you can explicitly order them:

5000:be.objectify.example.MyExamplePlugin
10000:be.objectify.example.MyOtherExamplePlugin

Modules

A module can be thought of as a reusable application that you can include in your own app. It's analogous to a third-party library that adds specific functionality. A module can contain plugins, which you can hook into your app using the conf/play.plugins file.
For example, if you're using Deadbolt 2 you would need to add the following to your play.plugins file:

10000:be.objectify.deadbolt.DeadboltPlugin

A list of Play 2 modules can be found on the Play 2 GitHub wiki. You can read more on creating modules for Play 2 here and here.

Reference: Play 2 – modules, plugins, what's the difference? from our JCG partner Steve Chaloner at the Objectify blog....
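One thing the article above does not show is how to get hold of a registered plugin instance from application code once Play has started it. The following is a hedged Java sketch of one way to do that; I believe Play 2's Application exposes a plugin(Class) lookup, but treat the exact call as an assumption rather than the article author's recommendation:

import play.Play;
import be.objectify.example.MyExamplePlugin;

public class PluginLookupExample {

    public static MyExamplePlugin examplePlugin() {
        // Look up the running plugin instance registered via conf/play.plugins.
        // Assumption: Play.application().plugin(Class) is available in this Play 2 version.
        MyExamplePlugin plugin = Play.application().plugin(MyExamplePlugin.class);
        if (plugin == null) {
            throw new IllegalStateException("MyExamplePlugin is not registered in conf/play.plugins");
        }
        return plugin;
    }
}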

Apache Camel Tutorial – Introduction to EIP, Routes, Components, Testing and other Concepts

Data exchanges between companies are increasing a lot. The number of applications which must be integrated is increasing, too. The interfaces use different technologies, protocols and data formats. Nevertheless, the integration of these applications shall be modeled in a standardized way, realized efficiently and supported by automatic tests. Such a standard exists with the Enterprise Integration Patterns (EIP) [1], which have become the industry standard for describing, documenting and implementing integration problems. Apache Camel [2] implements the EIPs and offers a standardized, internal domain-specific language (DSL) [3] to integrate applications. This article gives an introduction to Apache Camel including several code examples.

Enterprise Integration Patterns

EIPs can be used to split integration problems into smaller pieces and model them using standardized graphics. Everybody can understand these models easily. Besides, there is no need to reinvent the wheel every time for each integration problem. Using EIPs, Apache Camel closes a gap between modeling and implementation. There is almost a one-to-one relation between EIP models and the DSL of Apache Camel. This article explains the relation of EIPs and Apache Camel using an online shop example.

Use Case: Handling Orders in an Online Shop

The main concepts of Apache Camel are introduced by implementing a small use case. Starting your own project should be really easy after reading this article. The easiest way to get started is using a Maven archetype [4]. This way, you can rebuild the following example within minutes. Of course, you can also download the whole example at once [5]. Figure 1 shows the example from the EIP perspective. The task is to process orders of an online shop. Orders arrive in CSV format. At first, the orders have to be transformed to the internal format. The order items of each order must be split because the shop only sells DVDs and CDs. Other order items are forwarded to a partner.

Figure 1: EIP Perspective of the Integration Problem

This example shows the advantages of EIPs: the integration problem is split into several small, recurring subproblems. These subproblems are easy to understand and solved the same way each time. After describing the use case, we will now look at the basic concepts of Apache Camel.

Basic Concepts

Apache Camel runs on the Java Virtual Machine (JVM). Most components are realized in Java, though this is no requirement for new components. For instance, the camel-scala component is written in Scala. The Spring framework is used in some parts, e.g. for transaction support. However, Spring dependencies were reduced to a minimum in release 2.9 [6]. The core of Apache Camel is very small and just contains commonly used components (i.e. connectors to several technologies and APIs) such as Log, File, Mock or Timer. Further components can be added easily due to the modular structure of Apache Camel. Maven is recommended for dependency management, because most technologies require additional libraries; though, libraries can also be downloaded manually and added to the classpath, of course.

The core functionality of Apache Camel is its routing engine. It allocates messages based on the related routes. A route contains the flow and integration logic. It is implemented using EIPs and a specific DSL. Each message contains a body, several headers and optional attachments. The messages are sent from a provider to a consumer. In between, the messages may be processed, e.g. filtered or transformed.
Figure 1 shows how the messages can change within a route. Messages between a provider and a consumer are managed by a message exchange container, which contains a unique message id, exception information, incoming and outgoing messages (i.e. request and response), and the used message exchange pattern (MEP). The "In Only" MEP is used for one-way messages such as JMS, whereas the "In Out" MEP executes request-response communication such as a client-side HTTP based request and its response from the server side. After shortly explaining the basic concepts of Apache Camel, the following sections will give more details and code examples. Let's begin with the architecture of Apache Camel.

Architecture

Figure 2 shows the architecture of Apache Camel. A CamelContext provides the runtime system. Inside, processors handle things in between endpoints, like routing or transformation. Endpoints connect the several technologies to be integrated. Apache Camel offers different DSLs to realize the integration problems.

Figure 2: Architecture of Apache Camel

CamelContext

The CamelContext is the runtime system of Apache Camel and connects its different concepts such as routes, components or endpoints. The following code snippet shows a Java main method which starts the CamelContext and stops it after 30 seconds. Usually, the CamelContext is started when loading the application and stopped at shutdown.

public class CamelStarter {

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new IntegrationRoute());
        context.start();

        Thread.sleep(30000);

        context.stop();
    }
}

The runtime system can be included anywhere in the JVM environment, including a web container (e.g. Tomcat), a JEE application server (e.g. IBM WebSphere AS), an OSGi container, or even the cloud.

Domain Specific Languages

DSLs facilitate the realization of complex projects by using a higher abstraction level. Apache Camel offers several different DSLs. Java, Groovy and Scala use object-oriented concepts and offer a specific method for most EIPs. On the other side, the Spring XML DSL is based on the Spring framework and uses XML configuration. Besides, OSGi blueprint XML is available for OSGi integration. The Java DSL has the best IDE support. The Groovy and Scala DSLs are similar to the Java DSL; in addition they offer typical features of modern JVM languages such as concise code or closures. Contrary to these programming languages, the Spring XML DSL requires a lot of XML. On the other hand, it offers a very powerful Spring-based dependency injection mechanism and nice abstractions to simplify configurations (such as JDBC or JMS connections). The choice is purely a matter of taste in most use cases. Even a combination is possible. Many developers use Spring XML for configuration whilst routes are realized in Java, Groovy or Scala.

Routes

Routes are a crucial part of Apache Camel. The flow and logic of an integration are specified here. The following example shows a route using the Java DSL:

public class IntegrationRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("file:target/inbox")
            .process(new LoggingProcessor())
            .bean(new TransformationBean(), "makeUpperCase")
            .to("file:target/outbox/dvd");
    }
}

The DSL is easy to use. Everybody should be able to understand the above example without even knowing Apache Camel. The route realizes a part of the described use case. Orders are put in a file directory from an external source. The orders are processed and finally moved to the target directory.
Routes have to extend the RouteBuilder class and override the configure method. The route itself begins with a "from" endpoint and finishes at one or more "to" endpoints. In between, all necessary processing logic is implemented. Any number of routes can be implemented within one configure method. The following snippet shows the same route realized via the Spring XML DSL:

<beans ... >
  <bean class="mwea.TransformationBean" id="transformationBean"/>
  <bean class="mwea.LoggingProcessor" id="loggingProcessor"/>

  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <package>mwea</package>
    <route>
      <from uri="file:target/inbox"/>
      <process ref="loggingProcessor"/>
      <bean ref="transformationBean"/>
      <to uri="file:target/outbox"/>
    </route>
  </camelContext>
</beans>

Besides routes, another important concept of Apache Camel is its components. They offer integration points for almost every technology.

Components

In the meantime, over 100 components are available. Besides widespread technologies such as HTTP, FTP, JMS or JDBC, many more technologies are supported, including cloud services from Amazon, Google, GoGrid, and others. New components are added in each release. Often, the community also builds new custom components because it is very easy. The most amazing feature of Apache Camel is its uniformity. All components use the same syntax and concepts. Every integration, and even its automatic unit tests, looks the same. Thus, complexity is reduced a lot. Consider changing the above example: if orders should be sent to a JMS queue instead of a file directory, just change the "to" endpoint from "file:target/outbox" to "jms:queue:orders". That's it! (JMS must be configured once within the application before, of course.) While components offer the interface to technologies, Processors and Beans can be used to add custom integration logic to a route.

Processors and Beans

Besides using EIPs, you often have to add individual integration logic. This is very easy and again always uses the same concepts: Processors or Beans. Both were used in the route example above. Processor is a simple Java interface with one single method: process. Inside this method, you can do whatever you need to solve your integration problem, e.g. transform the incoming message, call other services, and so on.

public class LoggingProcessor implements Processor {

    @Override
    public void process(Exchange exchange) throws Exception {
        System.out.println("Received Order: " + exchange.getIn().getBody(String.class));
    }
}

The "exchange" parameter contains the Message Exchange with the incoming message, the outgoing message, and other information. Because you implement the Processor interface, you have a dependency on the Camel API. This might be a problem sometimes. Maybe you already have existing integration code which cannot be changed (i.e. you cannot implement the Processor interface)? In this case, you can use Beans, also called POJOs (Plain Old Java Objects). You get the incoming message (which is the parameter of the method) and return an outgoing message, as shown in the following snippet:

public class TransformationBean {

    public String makeUpperCase(String body) {
        String transformedBody = body.toUpperCase();
        return transformedBody;
    }
}

The above bean receives a String, transforms it, and finally sends it to the next endpoint. Look at the route above again. The incoming message is a File. You may wonder why this works. Apache Camel offers another powerful feature: more than 150 automatic type converters are included out of the box, e.g.
FileToString, CollectionToObject[] or URLtoInputStream. By the way: further type converters can be created and added to the CamelContext easily [7]. If a Bean only contains one single method, it can even be omitted in the route. The above call therefore could also be .bean(new TransformationBean()) instead of .bean(new TransformationBean(), "makeUpperCase").

Adding some more Enterprise Integration Patterns

The above route transforms incoming orders using the Translator EIP before processing them. Besides this transformation, some more work is required to realize the whole use case. Therefore, some more EIPs are used in the following example:

public class IntegrationRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("file:target/inbox")
            .process(new LoggingProcessor())
            .bean(new TransformationBean())
            .unmarshal().csv()
            .split(body().tokenize(","))
            .choice()
                .when(body().contains("DVD"))
                    .to("file:target/outbox/dvd")
                .when(body().contains("CD"))
                    .to("activemq:CD_Orders")
                .otherwise()
                    .to("mock:others");
    }
}

Each CSV file represents one single order containing one or more order items. The camel-csv component is used to convert the CSV message. Afterwards, the Splitter EIP separates each order item of the message body. In this case, the default separator (a comma) is used, though complex regular expressions or scripting languages such as XPath, XQuery or SQL can also be used as the splitter. Each order item has to be sent to a specific processing unit (remember: there are DVD orders, CD orders, and other orders which are sent to a partner). The content-based router EIP solves this problem without any individual coding effort. DVD orders are processed via a file directory whilst CD orders are sent to a JMS queue. ActiveMQ is used as the JMS implementation in this example. To add ActiveMQ support to a Camel application, you only have to add the related Maven dependency for the camel-activemq component or add the JARs to the classpath manually. That's it. Some other components need a little bit more one-time configuration. For instance, if you want to use WebSphere MQ or another JMS implementation instead of ActiveMQ, you have to configure the JMS provider. All other order items besides DVDs and CDs are sent to a partner. Unfortunately, this interface is not available yet. The Mock component is used instead to simulate this interface for the moment. The above example shows impressively how different interfaces (in this case File, JMS, and Mock) can be used within one route. You always apply the same syntax and concepts despite very different technologies.

Automatic Unit and Integration Tests

Automatic tests are crucial. Nevertheless, they are usually neglected in integration projects. The reason is too much effort and very high complexity due to several different technologies. Apache Camel solves this problem: it offers test support via JUnit extensions. The test class must extend CamelTestSupport to use Camel's powerful testing capabilities. Besides additional assertions, mocks are supported implicitly. No other mock framework such as EasyMock or Mockito is required. You can even simulate sending messages to a route, or receiving messages from it, via a producer or consumer template respectively. All routes can be tested automatically using this test kit. It is noteworthy that the syntax and concepts are the same for every technology, again.
The following code snippet shows a unit test for our example route:

public class IntegrationTest extends CamelTestSupport {

    @Before
    public void setup() throws Exception {
        super.setUp();
        context.addRoutes(new IntegrationRoute());
    }

    @Test
    public void testIntegrationRoute() throws Exception {
        // Body of test message containing several order items
        String bodyOfMessage = "Harry Potter / dvd, Metallica / cd, Claus Ibsen – Camel in Action / book ";

        // Initialize the mock and set expected results
        MockEndpoint mock = context.getEndpoint("mock:others", MockEndpoint.class);
        mock.expectedMessageCount(1);
        mock.setResultWaitTime(1000);

        // Only the book order item is sent to the mock
        // (because it is not a cd or dvd)
        String bookBody = "Claus Ibsen – Camel in Action / book".toUpperCase();
        mock.expectedBodiesReceived(bookBody);

        // ProducerTemplate sends a message (i.e. a File) to the inbox directory
        template.sendBodyAndHeader("file://target/inbox", bodyOfMessage, Exchange.FILE_NAME, "order.csv");

        Thread.sleep(3000);

        // Was the file moved to the outbox directory?
        File target = new File("target/outbox/dvd/order.csv");
        assertTrue("File not moved!", target.exists());

        // Was the file transformed correctly (i.e. to uppercase)?
        String content = context.getTypeConverter().convertTo(String.class, target);
        String dvdbody = "Harry Potter / dvd".toUpperCase();
        assertEquals(dvdbody, content);

        // Was the book order (i.e. "Camel in Action", which is not a cd or dvd) sent to the mock?
        mock.assertIsSatisfied();
    }
}

The setup method creates an instance of CamelContext (and does some additional stuff). Afterwards, the route is added such that it can be tested. The test itself creates a mock and sets its expectations. Then, the producer template sends a message to the "from" endpoint of the route. Finally, some assertions validate the results. The test can be run the same way as any other JUnit test: directly within the IDE or inside a build script. Even agile Test-Driven Development (TDD) is possible: at first, the Camel test has to be written, before implementing the corresponding route. If you want to learn more about Apache Camel, the first address should be the book "Camel in Action" [8], which describes all the basics and many advanced features in detail, including working code examples for each chapter. After whetting your appetite, let's now discuss when to use Apache Camel…

Alternatives for Systems Integration

Figure 3 shows three alternatives for integrating applications:

Own custom solution: Implement an individual solution that works for your problem without separating problems into little pieces. This works and is probably the fastest alternative for small use cases. You have to code everything yourself.

Integration framework: Use a framework which helps to integrate applications in a standardized way using several integration patterns. It reduces effort a lot. Every developer will easily understand what you did. You do not have to reinvent the wheel each time.

Enterprise Service Bus (ESB): Use an ESB to integrate your applications. Under the hood, the ESB often also uses an integration framework. But there is much more functionality, such as business process management, a registry or business activity monitoring. You can usually configure routing and the like within a graphical user interface (you have to decide on your own whether that reduces complexity and effort). Usually, an ESB is a complex product. The learning curve is much higher than for a lightweight integration framework.
In return, though, you get a very powerful tool, which should fulfill all your requirements in large integration projects.

If you decide to use an integration framework, you still have three good alternatives in the JVM environment: Spring Integration [9], Mule [10], and Apache Camel. They are all lightweight, easy to use and implement the EIPs. Therefore, they offer a standardized way to integrate applications and can be used even in very complex integration projects. A more detailed comparison of these three integration frameworks can be found at [11]. My personal favorite is Apache Camel due to its awesome Java, Groovy and Scala DSLs, combined with many supported technologies. Spring Integration and Mule only offer XML configuration. I would only use Mule if I needed some of its awesome unique connectors to proprietary products (such as SAP, Tibco Rendezvous, Oracle Siebel CRM, Paypal or IBM's CICS Transaction Gateway). I would only use Spring Integration in an existing Spring project and if I only needed to integrate widespread technologies such as FTP, HTTP or JMS. In all other cases, I would use Apache Camel. Nevertheless: no matter which of these lightweight integration frameworks you choose, you will have much fun realizing complex integration projects easily with low effort. Remember: often, a fat ESB has too much functionality, and therefore too much unnecessary complexity and effort. Use the right tool for the right job!

Apache Camel is ready for Enterprise Integration Projects

Apache Camel already celebrated its fourth birthday in July 2011 [12] and represents a very mature and stable open source project. It supports all the requirements to be used in enterprise projects, such as error handling, transactions, scalability, and monitoring. Commercial support is also available. Its most important gains are its available DSLs, its many components for almost every thinkable technology, and the fact that the same syntax and concepts can always be used – even for automatic tests – no matter which technologies have to be integrated. Therefore, Apache Camel should always be evaluated as a lightweight alternative to heavyweight ESBs. Get started by downloading the example of this article. If you need any help or further information, there is a great community and a well-written book available.
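The route above sends CD orders to "activemq:CD_Orders", and the article notes that the JMS provider has to be configured once in the application. As a hedged illustration only (the broker URL is a placeholder, and this is just one way to register the component, not necessarily the article author's setup), the ActiveMQ component can be added to the CamelContext programmatically roughly like this:

import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

public class CamelStarterWithActiveMQ {

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // Register the "activemq" component used by the "activemq:CD_Orders" endpoint.
        // The broker URL below is a placeholder for your own broker.
        context.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));

        context.addRoutes(new IntegrationRoute());
        context.start();
        Thread.sleep(30000);
        context.stop();
    }
}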
Sources:

[1] "Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions", ISBN: 0321200683, Gregor Hohpe, Bobby Woolf
[2] Apache Camel http://camel.apache.org
[3] Internal DSL http://martinfowler.com/bliki/DomainSpecificLanguage.html
[4] Camel Archetypes http://camel.apache.org/camel-maven-archetypes.html
[5] Example Code for this Article at GitHub https://github.com/megachucky/camel-infoq
[6] Reduced Dependency on Spring JARs http://davsclaus.blogspot.com/2011/08/apache-camel-29-reduced-dependency-on.html
[7] Camel Type Converter http://camel.apache.org/type-converter.html
[8] "Camel in Action", ISBN: 1935182366, Claus Ibsen, Jonathan Anstey, Hadrian Zbarcea
[9] Spring Integration www.springsource.org/spring-integration
[10] Mule ESB http://www.mulesoft.org
[11] Comparison of Apache Camel, Mule ESB and Spring Integration http://www.kai-waehner.de/blog/2012/01/10/spoilt-for-choice-which-integration-framework-to-use-spring-integration-mule-esb-or-apache-camel
[12] Fourth Birthday of Apache Camel http://camel.apache.org/2011/07/07/happy-birthday-camel.html

Reference: Apache Camel Tutorial – Introduction to EIP, Routes, Components, Testing, and other Concepts from our JCG partner Kai Wahner at the Blog about Java EE / SOA / Cloud Computing blog....

Code Forensics

How do you know whether using code metrics really does help to produce code with fewer bugs? I am convinced they do, but how can I possibly prove it?

All projects have historic data. This is usually stored in your bug tracking and source code control tools. We can use the data stored in these systems to perform 'code forensics': we use the historic data from real issues to see if they could have been avoided. This can all be done without affecting any of your existing code or adding any risk to your project. Surely that's a useful software engineering technique?

Disclaimer

Firstly, I realize that most bugs you find in a standard project are not caused by code quality – it's probably only a small percentage. However, the ones that exist are avoidable. It is these avoidable quality issues that I want to concentrate on. I want to be able to determine when exceeding a metric threshold is likely to result in a problem. It's possible that if enough code forensics are run on my individual code base, I may be able to come up with some numbers that are useful to me in the future. In the long term it may be possible for someone to do a large study and come up with better guidelines.

Process

The process is quite straightforward:

1. Query your bug tracking tool for all the issues that required a code fix.
2. Assess the defects.
3. Identify the code.
4. Get the root cause.

Query your bug tracking tool

The first thing you need to do is to identify all your recent bugs, let's say for the last month. Do a simple query to bring back all of the bugs during that period. This should be easy – otherwise you're using the wrong tool! Now you have a full list of all of the defects that you are potentially interested in.

Assess the defects

You now need to go through each of the bugs and assess whether the issue really was a code issue. Other things it might be include:

• A requirements issue.
• An issue with the deployment environment.
• Configuration issues.

What you are left with is a list of issues that really were caused by bad code.

Identify the problematic code

You now need to map your list of issues back to the relevant source. You will not be able to do this unless you have been disciplined with your check-in comments. In most places I have worked, when checking in a bug fix you always start the comment with a reference to the problem it fixes. Assuming you have been commenting your commits with the reference, you can do a simple query to see which code was affected. This can be done in Fisheye, Tortoise, etc. to get the required code.

Get to the root cause

Finally you have something to look at, so what do you do with it? Well, first you have to understand how the fix works and decide whether it was a code quality issue. Perhaps the issue was a simple error rather than something a metric would have caught. However, you might open the code and find something like this: the average complexity in our system is 10, and this piece of code has a complexity of 106! This was an accident waiting to happen! Clearly the bug would have been more likely to have been caught had we failed the build because the code did not meet the expected quality standards. This is a potentially avoidable error.

Another angle

Another way to try and establish a link between poor code quality and defects is to take advantage of something such as the Sonar hotspot view to see the most complex classes in your system. You can then work backwards and examine the history of those files to see if those classes are causing issues in your code base. The trouble is that it is not that simple. High-complexity files which are used infrequently are less likely to cause you trouble than files of lower complexity which are used more frequently.

Automating the process

For this to be any use it probably needs to be automated so that a large sample of data can be examined. Some tools already make the link between defects and the related fix source code. The next step is to pull that data back and run your metrics analysis on the files.

Summary

None of this is conclusive; however, I still think it's a useful technique. What it is most likely to prove is that you have had past problems which you could have avoided with metrics. It should also give you an idea of which metrics to use. It's also likely to show that most problems are not caused by poor code quality, but by other factors instead.

Reference: Code Forensics from our JCG partner John Dobie at the Agile Engineering Techniques blog....

ServletRequest startAsync() limited usefulness

Some time ago I came across the What's the purpose of AsyncContext.start(…) in Servlet 3.0? question. Quoting the Javadoc of the aforementioned method:

Causes the container to dispatch a thread, possibly from a managed thread pool, to run the specified Runnable.

To remind all of you, AsyncContext is a standard way, defined in the Servlet 3.0 specification, to handle HTTP requests asynchronously. Basically the HTTP request is no longer tied to an HTTP thread, allowing us to handle it later, possibly using fewer threads. It turned out that the specification provides an API to handle asynchronous threads in a different thread pool out of the box. First we will see how this feature is completely broken and useless in Tomcat and Jetty – and then we will discuss why its usefulness is questionable in general.

Our test servlet will simply sleep for a given amount of time. This is a scalability killer in normal circumstances because even though a sleeping servlet is not consuming CPU, the sleeping HTTP thread tied to that particular request consumes memory – and no other incoming request can use that thread. In our test setup I limited the number of HTTP worker threads to 10, which means only 10 concurrent requests completely block the application (it is unresponsive from the outside) even though the application itself is almost completely idle. So clearly sleeping is an enemy of scalability.

@WebServlet(urlPatterns = Array("/*"))
class SlowServlet extends HttpServlet with Logging {

  protected override def doGet(req: HttpServletRequest, resp: HttpServletResponse) {
    logger.info("Request received")
    val sleepParam = Option(req.getParameter("sleep")) map {_.toLong}
    TimeUnit.MILLISECONDS.sleep(sleepParam getOrElse 10)
    logger.info("Request done")
  }
}

Benchmarking this code reveals that the average response times are close to the sleep parameter as long as the number of concurrent connections is below the number of HTTP threads. Unsurprisingly the response times begin to grow the moment we exceed the HTTP thread count. The eleventh connection has to wait for some other request to finish and release a worker thread. When the concurrency level exceeds 100, Tomcat begins to drop connections – too many clients are already queued.

So what about the fancy AsyncContext.start() method (not to be confused with ServletRequest.startAsync())? According to the Javadoc I can submit any Runnable and the container will use some managed thread pool to handle it. This will help partially, as I no longer block HTTP worker threads (but still another thread somewhere in the servlet container is used). Quickly switching to an asynchronous servlet:

@WebServlet(urlPatterns = Array("/*"), asyncSupported = true)
class SlowServlet extends HttpServlet with Logging {

  protected override def doGet(req: HttpServletRequest, resp: HttpServletResponse) {
    logger.info("Request received")
    val asyncContext = req.startAsync()
    asyncContext.setTimeout(TimeUnit.MINUTES.toMillis(10))
    asyncContext.start(new Runnable() {
      def run() {
        logger.info("Handling request")
        val sleepParam = Option(req.getParameter("sleep")) map {_.toLong}
        TimeUnit.MILLISECONDS.sleep(sleepParam getOrElse 10)
        logger.info("Request done")
        asyncContext.complete()
      }
    })
  }
}

We are first enabling asynchronous processing and then simply moving sleep() into a Runnable and, hopefully, a different thread pool, releasing the HTTP thread pool. A quick stress test reveals slightly unexpected results (here: response times vs.
number of concurrent connections; the chart itself is not reproduced here). Guess what: the response times are exactly the same as with no asynchronous support at all(!) After closer examination I discovered that when AsyncContext.start() is called, Tomcat submits the given task back to… the HTTP worker thread pool, the same one that is used for all HTTP requests! This basically means that we have released one HTTP thread just to utilize another one milliseconds later (maybe even the same one). There is absolutely no benefit to calling AsyncContext.start() in Tomcat. I have no idea whether this is a bug or a feature. On one hand this is clearly not what the API designers intended. The servlet container was supposed to manage a separate, independent thread pool so that the HTTP worker thread pool is still usable. I mean, the whole point of asynchronous processing is to escape the HTTP pool. Tomcat pretends to delegate our work to another thread, while it still uses the original worker thread pool.

So why do I consider this to be a feature? Because Jetty is "broken" in exactly the same way… No matter whether this works as designed or is only a poor API implementation, using AsyncContext.start() in Tomcat and Jetty is pointless and only unnecessarily complicates the code. It won't give you anything; the application works exactly the same under high load as if there was no asynchronous logic at all.

But what about using this API feature on correct implementations like IBM WAS? It is better, but the API as-is still doesn't give us much in terms of scalability. To explain again: the whole point of asynchronous processing is the ability to decouple the HTTP request from an underlying thread, preferably by handling several connections using the same thread. AsyncContext.start() will run the provided Runnable in a separate thread pool. Your application is still responsive and can handle ordinary requests while the long-running requests that you decided to handle asynchronously are processed in a separate thread pool. It is better; unfortunately the thread pool and thread-per-connection idiom is still a bottleneck. For the JVM it doesn't matter what type of threads are started – they still occupy memory. So we are no longer blocking HTTP worker threads, but our application is not any more scalable in terms of the number of concurrent long-running tasks we can support.

In this simple and unrealistic example with a sleeping servlet we can actually support thousands of concurrent (waiting) connections using Servlet 3.0 asynchronous support with only one extra thread – and without AsyncContext.start(). Do you know how? Hint: ScheduledExecutorService.

Postscriptum: Scala goodness

I almost forgot. Even though the examples were written in Scala, I haven't used any cool language features yet. Here is one: implicit conversions. Make this available in your scope:

implicit def blockToRunnable[T](block: => T) = new Runnable {
  def run() {
    block
  }
}

And suddenly you can use a code block instead of instantiating Runnable manually and explicitly:

asyncContext start {
  logger.info("Handling request")
  val sleepParam = Option(req.getParameter("sleep")) map { _.toLong}
  TimeUnit.MILLISECONDS.sleep(sleepParam getOrElse 10)
  logger.info("Request done")
  asyncContext.complete()
}

Sweet!

Reference: javax.servlet.ServletRequest.startAsync() limited usefulness from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....
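Picking up the ScheduledExecutorService hint from the article above: here is a minimal, hedged Java sketch of the idea – completing many held requests from one shared scheduler thread instead of parking one thread per request. It is my own illustration under that assumption, not the original author's code, and error handling is omitted:

import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/*", asyncSupported = true)
public class ScheduledSlowServlet extends HttpServlet {

    // One shared scheduler thread serves all waiting requests.
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        long sleep = req.getParameter("sleep") != null
                ? Long.parseLong(req.getParameter("sleep")) : 10L;
        final AsyncContext asyncContext = req.startAsync();
        // The HTTP thread returns immediately; the response is completed later
        // by the scheduler, so no thread sleeps while the request is waiting.
        scheduler.schedule(new Runnable() {
            public void run() {
                asyncContext.complete();
            }
        }, sleep, TimeUnit.MILLISECONDS);
    }

    @Override
    public void destroy() {
        scheduler.shutdown();
    }
}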