What's New Here?


How To Write a NetBeans Plugin

Want to add a feature or automate something in your NetBeans IDE? Follow along as we write your first plugin for NetBeans. Let’s go beyond the simple Toolbar Example and create a plugin which can auto-update itself. This code is based on the WakaTime plugin for NetBeans. Our example plugin will simply print a Hello World statement and update to new versions if available… just enough to get you started.

Create a new Plugin Project

Choose File -> New Project, then NetBeans Modules -> Module as the project type. Name your project, choose a namespace or code name for your plugin, and add a Java file.

Plugin Starting Point

After creating the new Java class file, make it extend ModuleInstall and wrap it with @OnShowing so it only runs after the GUI has loaded.

```java
@OnShowing
public class MyPlugin extends ModuleInstall implements Runnable {
}
```

Press ALT + ENTER with your cursor over OnShowing, then select Search Module Dependency for OnShowing to import the Window System API into the project. This will add a new dependency to your project as well as the necessary import statements at the top of your file. Do the same for ModuleInstall.

Sometimes NetBeans misses the org.openide.util dependency, so you might have to add that one manually. To do that, right-click on MyPlugin, then select Properties. Choose the category Libraries, then click Add.... Type org.openide.util, then click OK. This will add the dependency to your project.xml file.

Press ALT + ENTER on your MyPlugin class, then choose Implement all abstract methods. One last thing: add this line to your manifest.mf file.

```
OpenIDE-Module-Install: org/myorg/myplugin/MyPlugin.class
```

Now the run() method will execute after your plugin has loaded.

Logging

Let’s make that println output to the NetBeans IDE log. First, set up the logger as an attribute of your MyPlugin class.

```java
public static final Logger log = Logger.getLogger("MyPlugin");
```

Press ALT + ENTER to import java.util.logging.Logger, then replace println with log.info("MyPlugin has loaded.");.

Updating Your Plugin Automatically

Create a new Java file UpdateHandler.java inside your MyPlugin package and replace the contents of this file with UpdateHandler.java. Search the module dependency and add any missing dependencies by pressing ALT + ENTER over each import statement. Add these lines to your manifest.mf file:

```
OpenIDE-Module-Layer: org/myorg/myplugin/layer.xml
OpenIDE-Module-Implementation-Version: 201501010101
```

Create a new XML document (the layer.xml referenced above) in your MyPlugin package:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE filesystem PUBLIC "-//NetBeans//DTD Filesystem 1.2//EN" "http://www.netbeans.org/dtds/filesystem-1_2.dtd">
<filesystem>
    <folder name="Services">
        <folder name="AutoupdateType">
            <file name="org_myorg_myplugin_update_center.instance">
                <attr name="displayName" bundlevalue="org.myorg.myplugin.Bundle#Services/AutoupdateType/org_myorg_myplugin_update_center.instance"/>
                <attr name="enabled" boolvalue="true"/>
                <attr name="instanceCreate" methodvalue="org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogFactory.createUpdateProvider"/>
                <attr name="instanceOf" stringvalue="org.netbeans.spi.autoupdate.UpdateProvider"/>
                <attr name="url" bundlevalue="org.myorg.myplugin.Bundle#org_myorg_myplugin_update_center"/>
            </file>
        </folder>
    </folder>
</filesystem>
```

Add this code to your MyPlugin class inside the run() method:

```java
WindowManager.getDefault().invokeWhenUIReady(new Runnable() {
    @Override
    public void run() {
        UpdateHandler.checkAndHandleUpdates();
    }
});
```

Add these lines to your Bundle.properties file:

```
Services/AutoupdateType/org_myorg_myplugin_update_center.instance=MyPlugin
UpdateHandler.NewModules=false
org_myorg_myplugin_update_center=https\://example.com/updates.xml
```

Now every time NetBeans restarts and launches your plugin, it will check for updates by downloading updates.xml from example.com. Your updates.xml file tells NetBeans where to get the new NBM of your plugin. To create an NBM for publishing your plugin, right-click on your MyPlugin project and select Create NBM. The NBM file is what you will publish to the NetBeans Plugin Portal. For an example of hosting updates.xml on GitHub, look at update.xml and the corresponding Bundle.properties from the WakaTime NetBeans plugin.

Reference: How To Write a NetBeans Plugin from our JCG partner Alan Hamlett at the Wakatime blog.
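The Logger line above uses plain java.util.logging, so its behavior can be checked outside NetBeans too. Here is a small self-contained sketch; the in-memory Handler wiring is only for demonstration (inside the IDE, NetBeans routes these records to its own log):

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class PluginLogDemo {

    // Same logger name the plugin uses
    public static final Logger log = Logger.getLogger("MyPlugin");

    public static void main(String[] args) {
        // Capture records in memory instead of the NetBeans IDE log
        final StringBuilder captured = new StringBuilder();
        Handler handler = new Handler() {
            @Override public void publish(LogRecord record) {
                captured.append(record.getLevel()).append(": ").append(record.getMessage());
            }
            @Override public void flush() {}
            @Override public void close() {}
        };
        log.addHandler(handler);
        log.setLevel(Level.INFO);

        // What MyPlugin.run() would do instead of println
        log.info("MyPlugin has loaded.");

        System.out.println(captured); // INFO: MyPlugin has loaded.
    }
}
```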

Integrating CDI and WebSockets

Thought of experimenting with a simple Java EE 7 prototype application involving JAX-RS (REST), WebSockets and CDI. Note: don’t want this to be a spoiler – but this post mainly talks about an issue which I faced while trying to use WebSockets and REST using CDI as a ‘glue’ (in a Java EE app). The integration did not materialize, but a few lessons were learnt nonetheless :-)

The idea was to use a REST end point as a ‘feed’ for a web socket end point which would in turn ‘push’ data to all connected clients:

- A JAX-RS end point which receives data (possibly in real time) from other sources as an input to the web socket end point
- Use CDI Events as the glue b/w the JAX-RS and WebSocket end points and ‘fire’ the payload

```java
@Path("/feed")
public class RESTFeed {

    @Inject
    Event<String> event;

    @POST
    @Consumes(MediaType.TEXT_PLAIN)
    public void push(String msg) {
        event.fire(msg);
    }
}
```

Use a CDI observer method in the WebSocket endpoint implementation to push data to connected clients:

```java
public void onMsg(@Observes String msg) {
    // different WS endpoint instance - notice the hash code value in the server log
    System.out.println("WS End point class ID -- " + this.hashCode());
    try {
        client.getBasicRemote().sendText(msg);
    } catch (IOException ex) {
        Logger.getLogger(ServerEndpoint.class.getName()).log(Level.SEVERE, null, ex);
    }
}
```

Of course, finer details like performance, async communication etc. have not been considered at this point. This is more of an experiment. But is this even possible? Here are the steps which I executed:

- Deployed the code
- Browsed to http://localhost:8080/Explore-WebSocket-CDI-Integration-Maven/ and connected as a web socket client
- Fired a HTTP POST request at the REST end point using Postman
- Boom! A NullPointerException in the observer method – I waited for a few seconds and then reality hit me!

Root cause (from what I understand)

Behavior of WebSocket end points: WebSocket end points are similar to JAX-RS resource classes in the sense that there is one instance of a web socket endpoint class per connected client (at least by default). This is clearly mentioned in the WebSocket specification. As soon as a client (peer) connects, a unique instance is created and one can safely cache the web socket Session object (the representation of the peer) as an instance variable. IMO, this is a simple and clean programming model.

But the CDI container had other plans! As soon as the REST end point fires a CDI event (in response to a POST request), the CDI container creates a different instance of the WebSocket endpoint (the CDI observer in this case). Why? Because CDI beans are contextual in nature. The application does not control the instances of CDI beans; it just uses them (via @Inject). It’s up to the container to create and destroy bean instances and ensure that an appropriate instance is available to beans executing in the same context. How does the container figure out the context, though? It’s via scopes – Application, Session, Request etc. (again, clearly mentioned in the CDI specification).

So, the gist of the matter is that there is NO instance of the WebSocket endpoint in the current context – hence a new instance is created by CDI in order to deliver the message. This of course means that the instance variable would point to null, and hence the NPE (duh!).

So the question is: which CDI scope is to be used for a WebSocket end point? I tried @ApplicationScoped, @SessionScoped and @RequestScoped without much luck – still a new instance and a NPE.

Any other options? Defining a Set of Sessions as a static variable will do the trick:

```java
private static Set<Session> peers = Collections.synchronizedSet(new HashSet<>());
```

But that IMO is just a hack and not feasible in case one needs to handle client-specific state (which can only be handled as instance variables) in the observer method – it’s bound to remain uninitialized.

Server Sent Events? But at the end of the day, SSE != WebSocket. In case the use case demands server-side push ‘only’, one can opt for it. SSE is not a Java EE standard yet – Java EE 8 might make this possible.

Solution? I am not an expert – but I guess it’s up to the WebSocket spec to provide more clarity on how to leverage it with CDI. Given that CDI is an indispensable part of the Java EE spec, it’s extremely important that it integrates seamlessly with other specifications – especially HTML5-centric specs such as JAX-RS, WebSocket etc. This post by Bruno Borges links to similar issues related to JMS, CDI and WebSocket and how they integrate with each other.

Did I miss something obvious? Do you have any inputs/solutions? Please feel free to chime in! :-) The sample code is available on GitHub (in case you want to take a look). I tried this on GlassFish 4.1 and Wildfly 8.2.0. That’s all for now I guess… :-) Cheers!

Reference: Integrating CDI and WebSockets from our JCG partner Abhishek Gupta at the Object Oriented.. blog.
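The root cause can be illustrated without any Java EE container at all. In the sketch below (plain Java; the MiniEventBus class is a made-up stand-in for the CDI container), a fresh observer instance is created for every delivered event, just as CDI does for a contextual bean – so state cached on the "connected" instance is never visible to the instance that receives the event:

```java
import java.util.function.Consumer;

// Hypothetical stand-in for the CDI container: it instantiates a fresh
// observer for every delivered event, like CDI does for a contextual bean.
class MiniEventBus {
    private final Class<? extends Consumer<String>> observerType;

    MiniEventBus(Class<? extends Consumer<String>> observerType) {
        this.observerType = observerType;
    }

    void fire(String msg) throws Exception {
        // A brand-new instance handles the event...
        observerType.getDeclaredConstructor().newInstance().accept(msg);
    }
}

public class Endpoint implements Consumer<String> {
    // Instance state, set when a "client connects" on one particular instance
    private String session;

    void onOpen(String session) {
        this.session = session; // cached on THIS instance only
    }

    @Override
    public void accept(String msg) {
        // The event-handling instance never saw onOpen(), so session is null
        System.out.println("session = " + session);
        session.length(); // NullPointerException, just like the observer method
    }

    public static void main(String[] args) throws Exception {
        Endpoint connected = new Endpoint();
        connected.onOpen("peer-42"); // one instance holds the peer state

        try {
            new MiniEventBus(Endpoint.class).fire("hello"); // another instance gets the event
        } catch (NullPointerException expected) {
            System.out.println("NPE: the observer instance has no session");
        }
    }
}
```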

3 Hacks to Increase Your Productivity

Logging programming time can make you a better coder. To get good at something you need to work on it for 10,000+ hours. Using a tool like WakaTime, you can log your coding hours and use these hacks to up your programming game.

Track yourself learning to code

Regardless of what the “be a programmer in 30 minutes” courses tell you, computer science is a study of discipline. By tracking your coding time on a daily, weekly and monthly basis, you can observe how hard you are working and how your skill is developing. You should need a lot more work to complete tasks at the beginning of a course than at the end. This would manifest itself in the higher number of files you touch in the same amount of time. Sharing the stats with your teacher will also help.

Hold yourself accountable for new skills

How many times have you thought about learning a new language but reverted to your familiar ones? Thursday, our awesome team member, saw a friend slacking when trying to learn Go. She recommended WakaTime’s language breakdown. After her friend saw how little time she actually spent using Go, she became more motivated and focused on using the new language.

See how much time you spend on documentation

Sometimes you might spend time on documentation but not know whether you are doing enough for yourself or your employer. Taking a look at the markdown files in your WakaTime time logs will show you how much effort you put into writing docs.

Overall, quantifying your effort goes a long way in achieving your goals. Programming is no different. So start logging, and begin improving.

Reference: 3 Hacks to Increase Your Productivity from our JCG partner Priyanka Sharma at the Wakatime blog.

Per client cookie handling with Jersey

A lot of REST services will use cookies as part of the authentication / authorisation scheme. This is a problem because by default the old Jersey client will use the singleton CookieHandler.getDefault, which in most cases will be null and, if not null, will not likely work in a multithreaded server environment. (This is because in the background the default Jersey client will use URL.openConnection.) Now you can work around this by using the Apache HTTP Client adapter for Jersey; but this is not always available. So if you want to use the Jersey client with cookies in a server environment, you need to do a little bit of reflection to ensure you use your own private cookie jar.

```java
final CookieHandler ch = new CookieManager();
Client client = new Client(new URLConnectionClientHandler(
    new HttpURLConnectionFactory() {
        @Override
        public HttpURLConnection getHttpURLConnection(URL uRL) throws IOException {
            HttpURLConnection connect = (HttpURLConnection) uRL.openConnection();
            try {
                Field cookieField = connect.getClass().getDeclaredField("cookieHandler");
                cookieField.setAccessible(true);
                MethodHandle mh = MethodHandles.lookup().unreflectSetter(cookieField);
                mh.bindTo(connect).invoke(ch);
            } catch (Throwable e) {
                e.printStackTrace();
            }
            return connect;
        }
    }));
```

This will only work if your environment is using the internal implementation of sun.net.www.protocol.http.HttpURLConnection that comes with the JDK. This appears to be the case for modern versions of WLS. For JAX-RS 2.0 you can make a similar change using the Jersey 2.x specific ClientConfig class and HttpUrlConnectorProvider.

```java
final CookieHandler ch = new CookieManager();
Client client = ClientBuilder.newClient(new ClientConfig()
    .connectorProvider(new HttpUrlConnectorProvider()
        .connectionFactory(new HttpUrlConnectorProvider.ConnectionFactory() {
            @Override
            public HttpURLConnection getConnection(URL uRL) throws IOException {
                HttpURLConnection connect = (HttpURLConnection) uRL.openConnection();
                try {
                    Field cookieField = connect.getClass().getDeclaredField("cookieHandler");
                    cookieField.setAccessible(true);
                    MethodHandle mh = MethodHandles.lookup().unreflectSetter(cookieField);
                    mh.bindTo(connect).invoke(ch);
                } catch (Throwable e) {
                    e.printStackTrace();
                }
                return connect;
            }
        })));
```

Update 11th Feb 2015: It seems in some cases, in particular when using https, I have seen the HttpURLConnection wrapped in another class; to work around this, just use reflection to access the value of the delegate field. I have updated the code examples to reflect this issue.

Reference: Per client cookie handling with Jersey from our JCG partner Gerard Davison at Gerard Davison’s blog.
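The delegate unwrapping mentioned in the update is not shown in the snippets above. The following self-contained sketch illustrates the pattern using stand-in classes (PlainConnection and HttpsWrapper are made up for the demo; the real wrapper and its field names are JDK internals): keep following any "delegate" field down before setting "cookieHandler":

```java
import java.lang.reflect.Field;
import java.net.CookieHandler;
import java.net.CookieManager;

public class CookieJarInjector {

    // Stand-ins for the JDK-internal classes: a connection with a
    // "cookieHandler" field, and an https wrapper holding it in "delegate".
    static class PlainConnection {
        private CookieHandler cookieHandler;
    }
    static class HttpsWrapper {
        private final PlainConnection delegate = new PlainConnection();
    }

    // Walk down any "delegate" fields, then set the "cookieHandler" field.
    static void inject(Object conn, CookieHandler ch) throws Exception {
        Object target = conn;
        while (true) {
            try {
                Field delegate = target.getClass().getDeclaredField("delegate");
                delegate.setAccessible(true);
                target = delegate.get(target); // unwrap one layer
            } catch (NoSuchFieldException e) {
                break; // no more wrappers
            }
        }
        Field cookieField = target.getClass().getDeclaredField("cookieHandler");
        cookieField.setAccessible(true);
        cookieField.set(target, ch);
    }

    public static void main(String[] args) throws Exception {
        CookieHandler ch = new CookieManager();
        HttpsWrapper wrapped = new HttpsWrapper();
        inject(wrapped, ch);
        System.out.println(wrapped.delegate.cookieHandler == ch); // true
    }
}
```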

Don’t waste time tracking technical debt

For the last couple of years we’ve been tracking technical debt in our development backlog. Adding debt payments to the backlog, making the cost and risk of technical debt visible to the team and to the Product Owner, and prioritizing payments with other work is supposed to ensure that debt gets paid down. But I am not convinced that it is worth it. Here’s why:

Debt that’s not worth tracking because it’s not worth paying off

Some debt isn’t worth worrying about. A little (but not too much) copy-and-paste. Fussy coding-style issues picked up by some static analysis tools (does it really matter where the brackets are?). Poor method and variable naming. Methods which are too big. Code that doesn’t properly follow coding standards or patterns. Other inconsistencies. Hard coding. Magic numbers. Trivial bugs. This is irritating, but it’s not the kind of debt that you need to track on the backlog. It can be taken care of in day-to-day opportunistic refactoring. The next time you’re in the code, clean it up. If you’re not going to change the code, then who cares? It’s not costing you anything. If you close your eyes and pretend that it’s not there, nothing really bad will happen.

Somebody else’s debt

Out-of-date Open Source or third-party software. The kind of things that Sonatype CLM or OWASP’s Dependency Check will tell you about. Some of this is bad – seriously bad. Exploitable security vulnerabilities. Think Heartbleed. This shouldn’t even make it to the backlog. It should be fixed right away. Make sure that you know that you can build and roll out a patched library quickly and with confidence (as part of your continuous build/integration/delivery pipeline). Everything else is low priority. If there’s a newer version with some bug fixes, but the code works the way you want it to, does it really matter? Upgrading for the sake of upgrading is a waste of time, and there’s a chance that you could introduce new problems, or break something that you depend on now, with little or no return. Remember, you have the source code – if you really need to fix something or add something, you can always do it yourself.

Debt you don’t know that you have

Some of the scariest debt is the debt that you don’t know you have. Debt that you took on unconsciously because you didn’t know any better… and you still don’t. You made some bad design decisions. You didn’t know how to use your application framework properly. You didn’t know about the OWASP Top 10 and how to protect against common security attacks. This debt can’t be on your backlog. If something changes – a new person with more experience joins the team, or you get audited, or you get hacked – this debt might get exposed suddenly. Otherwise it keeps adding up, silently, behind the scenes.

Debt that is too big to deal with

There’s other debt that’s too big to effectively deal with. Like the US National Debt. Debt that you took on early by making the wrong assumptions or the wrong decisions. Maybe you didn’t know you were wrong then, but now you do. You – or somebody before you – picked the wrong architecture. Or the wrong language, or the wrong framework. Or the wrong technology stack. The system doesn’t scale. Or it is unreliable under load. Or it is full of security holes. Or it’s brittle and difficult to change. You can’t refactor your way out of this. You either have to put up with it as best as possible, or start all over again. Tracking it on your backlog seems pointless: As a developer, I want to rewrite the system, so that everything doesn’t suck…

Fix it now, or it won’t get fixed at all

Technical debt that you can do something about is debt that you took on consciously and deliberately – sometimes responsibly, sometimes not. You took short cuts in order to get the code out for fast feedback (A/B testing, prototyping). There’s a good chance that you’ll have to rewrite it or even throw it out, so why worry about getting the code right the first time? This is strategic debt – debt that you can afford to take on, at least for a while. Or you were under pressure and couldn’t afford to do it right, right then. You had to get it done fast, and the results aren’t pretty. The code works, but it is a hack job. You copied and pasted too much. You didn’t follow conventions. You didn’t get the code reviewed. You didn’t write tests, or at least not enough of them. You left in some debugging code. It’s going to be a pain to maintain.

If you don’t get to this soon, if you don’t clean it up or rewrite it in a few weeks or a couple of months, then there is a good chance that this debt will never get paid down. The longer it stays, the harder it is to justify doing anything about it. After all, it’s working fine, and everyone has other things to do. The priority of doing something about it will continue to fall, until it’s like silt, settling to the bottom. Eventually you’ll forget that it’s there. When you see it, it will make you a little sad, but you’ll get over it. Like the shoppers in New York City, looking up at the US National Debt Clock on their way to the store to buy a new TV on credit. And hey, if you’re lucky, this debt might get paid down without you knowing about it. Somebody refactors some code while making a change, or maybe even deletes it because the feature isn’t used any more, and the debt is gone from the code base – even though it is still on your books.

Don’t track technical debt. Deal with it instead

Tracking technical debt sounds like the responsible thing to do. If you don’t track it, you can’t understand the scope of it. But whatever you record in your backlog will never be an accurate or complete record of how much debt you actually have – because of the hidden debt that you’ve taken on unintentionally, the debt that you don’t understand or haven’t found yet. More importantly, tracking work that you’re not going to do is a waste of everyone’s time. Only track debt that everyone (the team, the Product Owner) agrees is important enough to pay off. Then make sure to pay it off as quickly as possible – within 1 or 2 or maybe 3 sprints. Otherwise, you can ignore it. Spend your time refactoring instead of junking up the backlog. This isn’t being irresponsible. It’s being practical.

Reference: Don’t waste time tracking technical debt from our JCG partner Jim Bird at the Building Real Software blog.

OpenShift DIY: Build Spring Boot / Undertow application with Gradle

Gradle 1.6 was the last supported Gradle version to run on OpenShift due to this bug. But as of Gradle 2.2 this is no longer an issue, so running the newest Gradle on OpenShift should not be a problem anymore with the Do It Yourself cartridge. The DIY cartridge is an experimental cartridge that provides a way to test unsupported languages on OpenShift. It provides a minimal, free-form scaffolding which leaves all details of the cartridge to the application developer. This blog post illustrates the use of Spring Boot 1.2 and Java 8 running on Undertow, which is supported as a lightweight alternative to Tomcat. It should not take more than 10 minutes to get up and running.

Prerequisite

Before we can start building the application, we need to have an OpenShift free account and the client tools installed.

Step 1: Create DIY application

To create an application using the client tools, type the following command:

```
rhc app create <app-name> diy-0.1
```

This command creates an application using the DIY cartridge and clones the repository to a local directory.

Step 2: Delete Template Application Source code

OpenShift creates a template project that can be freely removed:

```
git rm -rf .openshift README.md diy misc
```

Commit the changes:

```
git commit -am "Removed template application source code"
```

Step 3: Pull Source code from GitHub

```
git remote add upstream https://github.com/kolorobot/openshift-diy-spring-boot-gradle.git
git pull -s recursive -X theirs upstream master
```

Step 4: Push changes

The basic template is ready to be pushed to OpenShift:

```
git push
```

The initial deployment (build and application startup) will take some time (up to several minutes). Subsequent deployments are a bit faster:

```
remote: BUILD SUCCESSFUL
remote: Starting DIY cartridge
remote: XNIO NIO Implementation Version 3.3.0.Final
remote: b.c.e.u.UndertowEmbeddedServletContainer : Undertow started on port(s) 8080 (http)
remote: Started DemoApplication in 15.156 seconds (JVM running for 17.209)
```

You can now browse to http://<app-name>.rhcloud.com/manage/health and you should see:

```
{
    "status": "UP"
}
```

When you log in to your OpenShift web account and navigate to Applications, you should see the new one.

Under the hood

Why DIY? A Spring Boot application can be deployed to the Tomcat cartridge on OpenShift. But at this moment no Undertow and Java 8 support exists, therefore DIY was selected. DIY has limitations: it cannot be scaled, for example. But it is perfect for trying and playing with new things.

Application structure

The application is a regular Spring Boot application that one can bootstrap with http://start.spring.io. The build system used is Gradle, and the packaging type is Jar. As of Spring Boot 1.2, the Undertow lightweight and performant Servlet 3.1 container is supported. In order to use Undertow instead of Tomcat, the Tomcat dependencies must be exchanged for the Undertow ones:

```gradle
buildscript {
    configurations {
        compile.exclude module: "spring-boot-starter-tomcat"
    }
}

dependencies {
    compile("org.springframework.boot:spring-boot-starter-undertow")
}
```

The OpenShift-specific configuration – application-openshift.properties – contains the logging configuration at the moment:

```
logging.file=${OPENSHIFT_DATA_DIR}/logs/app.log
```

OpenShift action_hooks

OpenShift executes action hook script files at specific points during the deployment process. All hooks are placed in the .openshift/action_hooks directory in the application repository. The files must be executable. On Windows, in Git Bash, the following command can be used:

```
git update-index --chmod=+x .openshift/action_hooks/*
```

Deploying the application

The deploy script downloads Java 8 and Gradle 2.2 and creates some directories. Downloading Gradle is done the following way:

```sh
if [ ! -d $OPENSHIFT_DATA_DIR/gradle-2.2.1 ]
then
    cd $OPENSHIFT_DATA_DIR
    wget https://services.gradle.org/distributions/gradle-2.2.1-bin.zip
    unzip gradle-2.2.1-bin.zip
    rm -f gradle-2.2.1-bin.zip
fi
```

After running the script, the following directories will be created in $OPENSHIFT_DATA_DIR:

```
gradle
gradle-2.2.1
jdk1.8.0_20
logs
```

In addition, the script exports a couple of environment variables required to properly run the Java 8 / Gradle build. GRADLE_USER_HOME is the most important one, as it sets the home directory where all the Gradle runtime files will be stored, including downloaded dependencies used to build the application. The final command of the deploy script runs a Gradle task to create a jar archive that can be executed from the command line using the java -jar command (see the next paragraph):

```
gradle bootRepackage
```

Starting the application

When the deploy script finishes successfully, the build directory will contain a single jar with the Spring Boot application assembled. The application is started and bound to the server address and port provided by OpenShift. In addition, the profile name is provided, so an additional properties file can be loaded. The final command that runs the application is as follows:

```
nohup java -Xms384m -Xmx412m -jar build/*.jar --server.port=${OPENSHIFT_DIY_PORT} --server.address=${OPENSHIFT_DIY_IP} --spring.profiles.active=openshift &
```

References

- The project source code, used throughout this article, can be found on GitHub: https://github.com/kolorobot/openshift-diy-spring-boot-sample
- Spring Boot documentation: http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#cloud-deployment-openshift
- Some OpenShift references used while creating this article: https://blog.openshift.com/run-gradle-builds-on-openshift and https://blog.openshift.com/tips-for-creating-openshift-apps-with-windows

Reference: OpenShift DIY: Build Spring Boot / Undertow application with Gradle from our JCG partner Rafal Borowiec at the Codeleak.pl blog.
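The --server.port/--server.address handoff is essentially "read OpenShift's environment variables and bind to them". As a minimal illustration of the same contract outside Spring Boot (using only the JDK's built-in com.sun.net.httpserver; the /manage/health-style endpoint here is just mimicking the health URL shown earlier):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class HealthServer {

    public static void main(String[] args) throws Exception {
        // OpenShift DIY injects the bind address/port via environment variables;
        // fall back to localhost:8080 when running outside the cartridge.
        String ip = System.getenv().getOrDefault("OPENSHIFT_DIY_IP", "127.0.0.1");
        int port = Integer.parseInt(
                System.getenv().getOrDefault("OPENSHIFT_DIY_PORT", "8080"));

        HttpServer server = HttpServer.create(new InetSocketAddress(ip, port), 0);
        server.createContext("/manage/health", exchange -> {
            byte[] body = "{\"status\": \"UP\"}".getBytes("UTF-8");
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Listening on " + ip + ":" + port);
    }
}
```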

Installing Liferay 6.2 Enterprise Edition on Websphere 8.0

Preparing Websphere for Liferay

When the application server binaries have been installed, start the Websphere application server (WAS) Profile Management Tool to create a profile appropriate for Liferay, and follow the instructions as described here in the official Liferay documentation. The instructions are for installing Liferay 6.2 on Websphere 8.5, but I have installed it on Websphere 8.0 successfully following the same instructions.

Notes

- I did not want Websphere to manage the database connections, so I skipped the database step, as I am using Liferay’s standard database configuration. I also skipped the mail configuration step because I did not care for that.
- Before completing the new profile wizard, note down the port where the administrative console runs (in my case it was 9062).
- I did not create the file portal-ext.properties. I used the Liferay setup wizard.
- I start and stop Websphere through Rational Application Developer for Websphere (see here). On Windows you can also start and stop the server using the IBM Websphere start menu tools.

Deploy Liferay

- Click Applications → New Application → New Enterprise Application.
- Browse to the Liferay .war file and click Next. I downloaded the Enterprise Edition 6.2 trial standalone. You will get an email with an xml license file.
- Leave Fast Path selected and click Next.
- Ensure that Distribute Application has been checked, and click Next again.
- Choose the Websphere runtimes and/or clusters where you want Liferay deployed. Click Next.
- Map Liferay to your preferred root context (/javacodegeeks/ for example) and click Next.
- Ensure that you have made all the correct choices and click Finish.
- When Liferay has installed, click Save to Master Configuration.

You’ve now installed Liferay, but don’t start it yet. If you wish to use PACL, you have one more thing to configure (see the Enabling Security for Portal Access Control Lists section of the Liferay documentation).

Starting Liferay

- Click Applications → Application Types → Websphere enterprise applications.
- Select the Liferay .war checkbox and click Start.

If no error shows up, go to http://localhost:<yourport>/<your contextroot>. If Liferay redirects you to a page that informs you that a license is missing, drop the xml license file that Liferay sent you by email into <WebsphereHome>\AppServer\profiles\<yourprofile>\Liferay\deploy\ and restart the server. You should now see the Liferay portal basic configuration page. That means that you have successfully installed Liferay on the Websphere application server. Enjoy!

Common issues

Identifying the ports

If you are not familiar with the Websphere application server, like me, it may be difficult to find where the application server administration console and Liferay run. The administration console of Websphere runs at http://localhost:9062/ibm/console/ in my case (the default port is 9063, I think). In your case the port may be different. You can also see the port where the administration console runs in the Websphere logs when it starts, or use the Windows start menu tools for IBM Websphere as shown in picture 2 above.

Once you have successfully launched the administration console, you can see a complete catalogue of the ports. Click Servers → Websphere application servers → <YourServerName> → Configuration tab → Communications → Ports. As you can see, the administrative console runs on port 9062 and Liferay should run on port 9082. So if the context root you have chosen is javacodegeeks, Liferay should run at http://localhost:9082/javacodegeeks.

Increasing JVM maximum heap size

One other common issue is that when Liferay is deployed to WAS, the pages load very slowly or keep hanging until a java.lang.OutOfMemoryError comes up and the server stops. We can easily increase the WAS maximum heap size – in most cases this solves the issue. In the Websphere administration console:

- Click Servers → Server Types → Websphere application servers → Server Infrastructure → Java and Process Management → Process definition.
- In the Additional Properties section, select Java Virtual Machine.
- In the General Properties section, put 256 for “Initial heap size” and 1024 for “Maximum heap size”.
- Done; restart Websphere.

References

- Liferay documentation
- http://www.mkyong.com/websphere/how-to-increase-websphere-jvm-memory/
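To confirm that the new heap settings actually took effect after the restart, one quick sanity check is to read the JVM's reported memory limits from any class running in the server (this snippet is just an illustration; the 1024 MB figure is the value configured above, not something the code enforces):

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxMb = rt.maxMemory() / (1024 * 1024);     // roughly the -Xmx value
        long totalMb = rt.totalMemory() / (1024 * 1024); // currently committed heap

        System.out.println("Max heap:       " + maxMb + " MB");
        System.out.println("Committed heap: " + totalMb + " MB");

        // With "Maximum heap size" set to 1024 in the WAS console,
        // maxMb should report close to 1024 (minus some JVM overhead).
    }
}
```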

Writing Just Enough Documentation

One of the misconception that is often associated with agile software development is that agile teams won’t write any technical documentation. I assume that this misconception is so common because the agile manifesto states that we should value working software over comprehensive documentation. Also, since some of us have experience from writing long technical documents that are not read or updated after we finished them, it is kind of natural to think that all technical documentation is waste. However, this is a lie that has serious consequences. If you have ever tried to maintain an application that has no technical documentation, you know what these consequences are. Once again we have to find a tradeoff between two bad options. Let’s get started. Writing Good Documents Before agile software development became popular, we spent considerable amount of time for writing long documents that were not read by anyone after we finished them. It was quite common that a software project had a lot of documentation, but most of them were useless because they were hopelessly outdated. It is clear that these traditional practices create a lot of waste, and it really makes no sense to write documents that are abandoned after the project is finished. There must be a better way. We can find a better way by answering to the question: what is a good document? I think that a good document fulfills these requirements:It has a “customer” who needs its information. For example: a developer needs the deployment instructions when he is deploying the application to the production environment. It is as short as possible but not shorter. A good document provides the required information to the reader as fast as possible. It must not contain unnecessary information that might disturb the reader AND it must not miss any relevant information. 
It is up-to-date.If we want to write technical documents that fulfill these requirements, we should follow these rules:We should not write a document only because the process requires it. If the information found in the document is not needed by anyone, we should not write it. We should keep the documentation as light as possible. Because shorter documents are easier to update, it is more likely that these documents are really updated. Also, because shorter documents are faster to read, we won’t be wasting the time of the persons who read them. We should put the documents in the place where they are needed. For example: the documents that are read (and written) by developers should be committed to the version control. This way every developer can access them and we can use code reviews for ensuring that these documents are updated when the code is changed. Every document that is committed to the version control system must be written using a text-based format. My favorite tool for the job is Asciidoctor but Markdown is a good choice as well.Let’s take a look at concrete examples that demonstrate what these rules really mean. What Kind of Documents Do We Need? If we want to figure out what kind of documents could be useful to us, we have to follow these steps:Figure out what we have to do. Find out what information we need so that we can do these things.If we think about a typical software development project or an application that is currently in the maintenance phase, we need to:Install or deploy our application. We can write instructions that describe how we can install (or deploy) our application. If we have to install other applications before we can install (or deploy) our application, these instructions must describe how we can install the required applications. Configure our application. If our application has a complex configuration (and real apps often do), we need instructions that describe how we can configure our application. 
The simplest way to write such instructions is to add comments to the configuration files of our application, but sometimes we have to write additional “tutorials” that describe the most common scenarios. Make changes to the code written by other developers. Before we can make changes to a piece of code, we have to understand two things: 1) how it is supposed to work and 2) how it works at the moment. Technical documentation cannot help us understand how the code is supposed to work, but it must help us understand how it is working at the moment. Surprisingly, we can write the necessary documentation without writing a single document. We can document our code by adding Javadocs to our classes and transforming our tests into executable specifications. Solve the problems that occur in the production environment. If we lived in a perfect world, we could ensure that we never have to solve the same problem twice. However, because we cannot always ensure this, it makes sense to ensure that we can identify the common problems and solve them as fast as possible. One way to do this is to create a FAQ that describes these problems and their solutions. Every FAQ entry must describe the problem and provide the information that is required to identify it. It should also describe the steps that are required to solve the problem. The person who solves a new problem must add a new entry to the FAQ. Help new developers to familiarize themselves with the codebase. If our codebase has good Javadocs and clean tests, we don’t necessarily need to write new documents. However, often our codebase is so large and complex that it is really hard to understand the big picture. That is why we often end up writing an architecture specification document that becomes outdated because no one bothers to update it. We can try to avoid this situation by keeping this document as thin as possible. 
If I have to write an architecture specification, I write a document that provides a brief description of the overall architecture, describes the modules and their responsibilities, describes how the cross-cutting concerns (authentication, authorization, error handling, validation, and transactions) are implemented, and describes the integrations.It is somewhat easy to think that I am arguing that we should always write these documents. However, this would be a mistake. Do We Really Need All These Documents? It depends. Every software project is different and it is impossible to say what kind of information is needed. That is why I think that putting our faith in best practices or processes, which specify what documents we must write, does not help us to be more agile. It only ensures that most of the documents we write are waste. We must stop looking for a silver bullet. We must stop following best practices. In fact, we must stop thinking about documents. If we want to eliminate the waste caused by writing obsolete documents, we should think about the information that we need and figure out a way to distribute this information to our existing and future team members. That is agile.Reference: Writing Just Enough Documentation from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....
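The “tests as executable specifications” idea mentioned above can be sketched roughly like this. The DiscountCalculator class and its threshold are invented purely for illustration:

```java
// A minimal sketch: the Javadoc states the rule, and the checks in main()
// read as sentences, so the class doubles as living documentation.
public class DiscountCalculatorSpec {

    /** Hypothetical business rule: orders of 100 or more get a 5% discount. */
    static class DiscountCalculator {
        /** Returns the discount for the given order total, never negative. */
        int discountFor(int orderTotal) {
            return orderTotal >= 100 ? orderTotal / 20 : 0;
        }
    }

    public static void main(String[] args) {
        DiscountCalculator calculator = new DiscountCalculator();
        // Orders below the threshold get no discount...
        if (calculator.discountFor(99) != 0) throw new AssertionError("below threshold");
        // ...and orders at or above it get five percent off.
        if (calculator.discountFor(200) != 10) throw new AssertionError("five percent");
        System.out.println("specification holds");
    }
}
```

In a real project the same checks would live in descriptively named JUnit test methods, so a new developer can read the test names to learn the rules.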
java-interview-questions-answers

Retry-After HTTP header in practice

Retry-After is a lesser known HTTP response header. Let me quote the relevant part of RFC 2616 (HTTP 1.1 spec): 14.37 Retry-After The Retry-After response-header field can be used with a 503 (Service Unavailable) response to indicate how long the service is expected to be unavailable to the requesting client. This field MAY also be used with any 3xx (Redirection) response to indicate the minimum time the user-agent is asked to wait before issuing the redirected request. The value of this field can be either an HTTP-date or an integer number of seconds (in decimal) after the time of the response. Retry-After = "Retry-After" ":" ( HTTP-date | delta-seconds ) Two examples of its use are: Retry-After: Fri, 31 Dec 1999 23:59:59 GMT Retry-After: 120 In the latter example, the delay is 2 minutes. Although the use case with 3xx responses is interesting, especially in eventually consistent systems (“your resource will be available under this link within 2 seconds”), we will focus on error handling. By adding Retry-After to the response, the server can give the client a hint as to when it will become available again. One might argue that the server hardly ever knows when it will be back on-line, but there are several valid use cases when such knowledge can somehow be inferred:Planned maintenance – this one is obvious; if your server is down within a scheduled maintenance window, you can send Retry-After from a proxy with precise information when to call back. Clients won’t bother retrying earlier, of course IF they understand and honour this header. Queue/thread pool full – if your request must be handled by a thread pool and it’s full, you can estimate when the next request can be handled. This requires a bounded queue (see: ExecutorService – 10 tips and tricks, point 6.) and a rough estimate of how long it takes for one task to be handled. Having this knowledge you can estimate when the next client can be served without queueing. 
Circuit breaker open – in Hystrix you can query the state of the circuit breaker (more on that below). Next available token/resource/whatever.

Let’s focus on one non-trivial use case. Imagine your web service is backed by a Hystrix command:

    private static final HystrixCommand.Setter CMD_KEY = HystrixCommand.Setter
            .withGroupKey(HystrixCommandGroupKey.Factory.asKey("REST"))
            .andCommandKey(HystrixCommandKey.Factory.asKey("fetch"));

    @RequestMapping(value = "/", method = GET)
    public String fetch() {
        return fetchCommand().execute();
    }

    private HystrixCommand<String> fetchCommand() {
        return new HystrixCommand<String>(CMD_KEY) {
            @Override
            protected String run() throws Exception {
                //...
            }
        };
    }

This works as expected: if the command fails, times out, or the circuit breaker is open, the client will receive 503. However, in the case of the circuit breaker we can at least estimate how long it would take for the circuit to close again. Unfortunately there is no public API telling us exactly how long the circuit will remain open in case of catastrophic failures. But we do know for how long the circuit breaker remains open by default, which is a good upper-bound estimate. Of course the circuit may remain open longer if the underlying command keeps failing. But Retry-After doesn’t guarantee that the server will be operational at the given time; it’s just a hint for the client to stop trying before then. 
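The queue/thread-pool estimate from the list above could be sketched like this. This is a hypothetical helper, not from the original article, and the averaging is deliberately crude:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// With a bounded queue and a rough per-task duration, a back-of-the-envelope
// estimate is: Retry-After ≈ queued tasks × average seconds per task / workers.
public class QueueBackoffEstimator {

    static long estimateRetryAfterSeconds(ThreadPoolExecutor pool, long avgTaskSeconds) {
        int queued = pool.getQueue().size();
        int workers = Math.max(1, pool.getMaximumPoolSize());
        return (queued * avgTaskSeconds) / workers;
    }

    public static void main(String[] args) {
        // Two workers, bounded queue of 10 – the kind of setup the article assumes.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(10));
        // Empty queue: the next client can be served immediately.
        System.out.println(estimateRetryAfterSeconds(pool, 3));
        pool.shutdown();
    }
}
```

In a real service you would sample the average task duration at runtime rather than hard-code it.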
The following implementation is simple, but broken:

    @RequestMapping(value = "/", method = GET)
    public ResponseEntity<String> fetch() {
        final HystrixCommand<String> command = fetchCommand();
        if (command.isCircuitBreakerOpen()) {
            return handleOpenCircuit(command);
        }
        return new ResponseEntity<>(command.execute(), HttpStatus.OK);
    }

    private ResponseEntity<String> handleOpenCircuit(HystrixCommand<String> command) {
        final HttpHeaders headers = new HttpHeaders();
        final Integer retryAfterMillis = command.getProperties()
                .circuitBreakerSleepWindowInMilliseconds().get();
        headers.set(HttpHeaders.RETRY_AFTER, Integer.toString(retryAfterMillis / 1000));
        return new ResponseEntity<>(headers, HttpStatus.SERVICE_UNAVAILABLE);
    }

As you can see, we can ask any command whether its circuit breaker is open or not. If it’s open, we set the Retry-After header to the circuitBreakerSleepWindowInMilliseconds value. This solution has a subtle but disastrous bug: if the circuit becomes open one day, we never run the command again because we eagerly return 503. This means Hystrix will never re-try executing it and the circuit will remain open forever. We must attempt to call the command every single time and catch the appropriate exception:

    @RequestMapping(value = "/", method = GET)
    public ResponseEntity<String> fetch() {
        final HystrixCommand<String> command = fetchCommand();
        try {
            return new ResponseEntity<>(command.execute(), OK);
        } catch (HystrixRuntimeException e) {
            log.warn("Error", e);
            return handleHystrixException(command);
        }
    }

    private ResponseEntity<String> handleHystrixException(HystrixCommand<String> command) {
        final HttpHeaders headers = new HttpHeaders();
        if (command.isCircuitBreakerOpen()) {
            final Integer retryAfterMillis = command.getProperties()
                    .circuitBreakerSleepWindowInMilliseconds().get();
            headers.set(HttpHeaders.RETRY_AFTER, Integer.toString(retryAfterMillis / 1000));
        }
        return new ResponseEntity<>(headers, SERVICE_UNAVAILABLE);
    }

This one works well. 
If the command throws an exception and the associated circuit is open, we set the appropriate header. In all examples we take milliseconds and normalize to seconds. I wouldn’t recommend it, but if for some reason you prefer absolute dates rather than relative timeouts in the Retry-After header, HTTP date formatting is finally part of Java (since JDK 8):

    import java.time.ZonedDateTime;
    import java.time.format.DateTimeFormatter;
    //...
    final ZonedDateTime after5seconds = ZonedDateTime.now().plusSeconds(5);
    final String httpDate = DateTimeFormatter.RFC_1123_DATE_TIME.format(after5seconds);

A note about auto-DDoS

You have to be careful with the Retry-After header if you send the same timestamp to a lot of unique clients. Imagine it’s 15:30 and you send Retry-After: Thu, 10 Feb 2015 15:40:00 GMT to everyone around – just because you somehow estimated that the service will be up at 15:40. The longer you keep sending the same timestamp, the bigger DDoS “attack” you can expect from clients respecting Retry-After. Basically everyone will schedule a retry at precisely 15:40 (obviously clocks are not perfectly aligned and network latency varies, but still), flooding your system with requests. If your system is properly designed, you might survive it. However, chances are you will mitigate this “attack” by sending another fixed Retry-After header, essentially re-scheduling the attack for later. That being said, avoid fixed, absolute timestamps sent to multiple unique clients. Even if you know precisely when your system will become available, spread the Retry-After values over some time period. Actually you should gradually let in more and more clients, so experiment with different probability distributions.

Summary

The Retry-After HTTP response header is neither universally known nor often applicable. But in the rather rare cases when downtime can be anticipated, consider implementing it on the server side. 
If clients are aware of it as well, you can significantly reduce network traffic while improving system throughput and response times.Reference: Retry-After HTTP header in practice from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....
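One way to act on the advice above about spreading Retry-After values is to add random jitter to the base delay. This is a hypothetical helper, not from the original article:

```java
import java.util.concurrent.ThreadLocalRandom;

// Instead of handing every client the same delay, spread retries over a
// window so they do not all come back at the same instant.
public class RetryAfterJitter {

    /** Base delay plus up to spreadSeconds extra seconds of uniform jitter. */
    static long retryAfterSeconds(long baseSeconds, long spreadSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(spreadSeconds + 1);
    }

    public static void main(String[] args) {
        // Each client receives a value in [30, 90] instead of a fixed 30.
        for (int i = 0; i < 3; i++) {
            System.out.println("Retry-After: " + retryAfterSeconds(30, 60));
        }
    }
}
```

A uniform distribution is the simplest choice; letting clients in gradually, as the article suggests, would call for a distribution skewed toward the end of the window.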
software-development-2-logo

Continuous Integration, Delivery, Deployment and Maturity Model

Continuous Integration, Continuous Deployment, and Continuous Delivery are all related to each other, and feed into each other. Several articles have been written on these terms. This blog will attempt to explain these terms in an easy-to-understand manner. What is Continuous Integration? Continuous Integration (CI) is a software practice that requires developers to commit their code to the main workspace at least once, possibly several times, a day. It’s expected that the developers have run unit tests in their local environment before committing the source code. All developers in the team follow this methodology. The main workspace is checked out, typically after each commit, or possibly at regular intervals, and then verified for anything from build issues, integration testing, functional testing, performance, longevity, or any other sort of testing.The level of testing that is performed in CI can vary completely but the key fundamental is that multiple integrations from different developers are done throughout the day. The biggest advantage of following this approach is that if there are any errors then they are identified early in the cycle, typically soon after the commit. Finding the bugs closer to the commit makes them much easier to fix. This is explained well by Martin Fowler: Continuous Integration doesn’t get rid of bugs, but it does make them dramatically easier to find and remove. There are lots of tools that provide CI capabilities. The most common ones are Jenkins from CloudBees, Travis CI, Go from ThoughtWorks, and Bamboo from Atlassian. What is Continuous Delivery? Continuous Delivery is the next logical step after Continuous Integration. It means that every change to the system, i.e. every commit, can be released to production at the push of a button. This means that every commit made to the workspace is a release candidate for production. This release however is still a manual process and requires an explicit push of a button. 
This manual step may be essential because of business concerns such as slowing the rate of software deployment.At certain times, you may even push the software to a production-like environment to obtain feedback. This allows you to get fast and automated feedback on the production-readiness of your software with each commit. A very high degree of automated testing is essential to enabling Continuous Delivery. Continuous Delivery is achieved by building Deployment Pipelines. This is best described in the Continuous Delivery book by Jez Humble (@jezhumble). A deployment pipeline is an automated implementation of your application’s build, deploy, test, and release process. The actual implementation of the pipeline, the tools used, and the processes may differ but the fundamental concept of 100% automation is the key. What is Continuous Deployment? Continuous Deployment is often confused with Continuous Delivery. However it is the logical conclusion of Continuous Delivery, where the release to production is completely automated. This means that every commit to the workspace is automatically released to production, thus leading to several deployments of your software during a day.Continuous Delivery Maturity Model Maturity Models allow a team or organization to assess its methods and processes against a clearly defined benchmark. As defined in the Capability Maturity Model – The term “maturity” relates to the degree of formality and optimization of processes, from ad hoc practices, to formally defined steps, to managed result metrics, to active optimization of the processes. The model explains different stages and helps teams to improve by moving from a lower stage to a higher one. Several Continuous Delivery Maturity Models are available, such as InfoQ, UrbanCode, ThoughtWorks, Bekk, and others. Capability Maturity Model Integration (CMMI) is defined by the Software Engineering Institute at Carnegie Mellon University.  
CMMI-Dev in particular defines a model that provides guidance for applying CMMI best practices in a development organization. It defines five maturity levels:Initial Managed Defined Quantitatively Managed OptimizingEach of the Continuous Delivery maturity models mentioned defines its own maturity levels. For example, Base, Beginner, Intermediate, Advanced, and Expert are used by InfoQ. Expert is changed to Extreme by UrbanCode. ThoughtWorks uses the CMMI-Dev maturity levels but does not segregate them into different areas. Here is another attempt at a maturity model that picks the best pieces from each of those.As a team/organization, you need to look at where you fit in this maturity model. And once you’ve identified that, more importantly, figure out how to get to the next level. For example, if your team does not have any data management or migration strategy then you are at the “Initial” level in “Data Migration”. Your goal would be to move from Initial -> Managed -> Defined -> Quantitatively Managed -> Optimizing. The progression from one level to the next is not necessarily sequential. But any change in the organization is typically met with inertia, so these incremental changes serve as a guideline to improve.Reference: Continuous Integration, Delivery, Deployment and Maturity Model from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.