

Fetching List of message codes from message.properties

Normally, messages from message properties are fetched via a key, i.e. a message code. What if we want to select more than one message property, like a list? To get a list of selected message codes from message.properties, we need to customize the messageSource bean. To do that, let's create a class 'CustomisedPluginAwareResourceBundleMessageSource' which extends the class 'PluginAwareResourceBundleMessageSource'. To fetch all properties we will use getMergedProperties().

import org.codehaus.groovy.grails.context.support.PluginAwareResourceBundleMessageSource

class CustomisedPluginAwareResourceBundleMessageSource extends PluginAwareResourceBundleMessageSource {

    List listMessageCodes(Locale locale, String lookupMessageCode) {
        Properties properties = getMergedProperties(locale).properties
        List listOfCodes = []
        properties.each {
            if (it.key.toString().matches(/^[\w.]*${lookupMessageCode}[.\w]*$/))
                listOfCodes.add(it.key)
        }
        return listOfCodes
    }
}

Here, 'listMessageCodes()' takes two parameters: the first is the locale we are looking in, and the second is the string we are searching for. It returns the list of codes which contain that string. The next thing we need to do is re-define the 'messageSource' bean in the resources file:

import com.ig.demoApp.CustomisedPluginAwareResourceBundleMessageSource

beans = {
    messageSource(CustomisedPluginAwareResourceBundleMessageSource) {
        basenames = "WEB-INF/grails-app/i18n/messages"
    }
}

That's it! All we need to do is call the method 'listMessageCodes()'. Here is a small example. Below is a sample of message codes in message.properties:

fruit.orange.label=Orange
fruit.black.label=Black Grape
fruit.red.label=Red Apple
fruit.green.label=Green Apple

square.black.label=Black Square
square.yellow.label=yellow Square
square.red.label=Pink Square

circle.violet.label=Violet Circle
circle.magenta.label=Magenta Circle
circle.olive.label=Olive Circle

and a controller like:

package demoapp

class DemoController {

    def messageSource

    def show() {
        [fruits: messageSource.listMessageCodes(request.locale, "fruit"),
         squares: messageSource.listMessageCodes(request.locale, "square"),
         circles: messageSource.listMessageCodes(request.locale, "circle"),
         blackColorItems: messageSource.listMessageCodes(request.locale, "black"),
         redColorItems: messageSource.listMessageCodes(request.locale, "red")]
    }
}

a gsp:

<g:form>
    <p>Available Fruits</p>
    <g:each in="${fruits}" var="fruit">
        <span>
            <input type="radio" name="fruit">
            <label><g:message code="${fruit}"/></label>
        </span>
    </g:each>

    <p>Available Squares</p>
    <g:each in="${squares}" var="square">
        <span>
            <input type="radio" name="square">
            <label><g:message code="${square}"/></label>
        </span>
    </g:each>

    <p>Available Circles</p>
    <g:each in="${circles}" var="circle">
        <span>
            <input type="radio" name="circle">
            <label><g:message code="${circle}"/></label>
        </span>
    </g:each>

    <p>Available Black Color Items</p>
    <g:each in="${blackColorItems}" var="blackColorItem">
        <span>
            <input type="radio" name="blackColorItem">
            <label><g:message code="${blackColorItem}"/></label>
        </span>
    </g:each>

    <p>Available Red Color Items</p>
    <g:each in="${redColorItems}" var="redColorItem">
        <span>
            <input type="radio" name="redColorItem">
            <label><g:message code="${redColorItem}"/></label>
        </span>
    </g:each>
</g:form>

That's it :) You can also find the demo here....

Thoughts about TDD and how to use it for untested legacy code

Prologue

My personal experiences with TDD mostly match what others have written on the internet; in short, TDD is good. It helps you to write better code, create a clean and nicely tested architecture, and make refactoring and design changes easier. It guides your design decisions, helps you think through every possible case you need to handle, and much more. I don't want to repeat the same things which have already been written a hundred times; instead I will try to share some personal thoughts.

Disadvantages of TDD?

Not everybody agrees with my first statement, for example: http://programmers.stackexchange.com/a/98566 First, it's hard, it makes things slower, it needs practice. Yes, that's true, but most of the complaints are not related to TDD itself. For TDD, you need to learn just a few important things: how to work in little steps and how to think upfront; the latter is necessary anyway for a good design. The problem is that people think TDD is an independent skill which does not require any background, but that's not true. Maybe you have already heard this: "We must learn to walk before we can run." The same goes for TDD: first you need to learn how to write unit tests at all, then write them in a TDD way. Most complaints are based on poorly written unit tests, not on the TDD process itself: high coupling, testing implementation details, overmocking things, etc. Those are shortcomings of the developers, not faults of TDD, but for most people it's hard to face that, so it's easier to blame something or somebody else. Once we know how to write good unit tests, we just need some practice for the next step, so I highly suggest doing some TDD katas, e.g.: http://osherove.com/tdd-kata-1/

Only for new code?

After all of this, doing TDD on new code is easy, so I want to say a few words about the worst case: when you need to make changes in legacy code which doesn't have any unit tests. The first question: is this really related to TDD? We need to cover the old behavior before writing the new one, so this won't be driven by tests, because the code is already written. Yes, that's true. We are doing it to protect ourselves when we implement the new functionality, but most importantly, it helps us to practice the same thought process which is necessary for doing TDD right.

Refactoring, the TDD way

The following technique is useful when the code is not too bad, e.g. it's not a god class with 3,000 lines and one public method, with everything created with 'new' inside the method bodies, etc. The essence of the technique: always try to find the shortest branch and write a test for that. If the class has multiple public methods, then find the shortest/simplest one. Then look for the shortest branch which has some behavior/decision point. The simplest form obviously is an "if-else"; e.g. if the "else" branch just sets a variable or throws an exception, test that first, then go to the "if" part and find the shortest branch there. Ok, but what to do when there is no "if"? It depends on the code, but here are some suggestions. Does it have an early return point? Then force it to take that branch. Does it have some loop? Can you force it to skip the loop (e.g. with an empty collection)? Then do that. Can you force it to throw an exception? Then do that. Use the same thinking when it calls a method: try to find the shortest branch inside that method too, and so on. How is this related to TDD?
Doesn't it sound familiar?

- We write a test for an assumption and may or may not watch it fail, depending on whether we missed some expectation or not.
- We go forward in baby steps, always writing a test for the smallest possible piece of code. Of course this means we don't build up the whole test environment in the beginning. I mean, if the class has 10 dependencies, then we don't create mocks or build those classes in the beginning, just the first time we want to use them. We can be sure that we won't need to change the already written tests to introduce a new dependency, because until that point we didn't have to use it. When we first use a new dependency and set some expectation on it, we write that in the current test. The next time we write the same thing, we should start thinking about putting it in the 'setUp' method, which means we refactor the test code in every iteration. And the best part of this is that we can estimate how to handle that particular expectation, because we see how many times it will be used in the code. If we see that it's a really big branch and we need to set the same things in 80% of the tests, then we can put it in some setup method; but if it is only a short branch and we need it in only 2-3 tests, then repeat it in every scenario instead.
- We build up the knowledge of the old code's behavior through these safe, little steps, just like we build up new code through TDD.

So I think of it as a kind of reverse TDD technique :) It does the job quite well, so I highly recommend it: the next time you face a similar code base, give it a try. ...
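To make the "shortest branch first" idea concrete, here is a minimal sketch in Java. The class and tests below are invented purely for illustration (they are not from any real code base): a legacy method whose guard clause is the shortest branch, so it gets the first test.

// Hypothetical legacy code: the guard clause is the shortest branch.
public class LegacyPriceCalculator {

    public int discountedPrice(int price, int discountPercent) {
        if (discountPercent <= 0) {
            return price; // shortest branch: cover this first
        }
        return price - (price * discountPercent / 100);
    }
}

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LegacyPriceCalculatorTest {

    private final LegacyPriceCalculator calculator = new LegacyPriceCalculator();

    @Test
    public void zeroDiscountReturnsOriginalPrice() {
        // First test: pin down the guard branch before touching the longer one.
        assertEquals(100, calculator.discountedPrice(100, 0));
    }

    @Test
    public void positiveDiscountReducesPrice() {
        // Only after the short branch is covered do we move to the main branch.
        assertEquals(80, calculator.discountedPrice(100, 20));
    }
}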

RESTful Charts with JAX-RS and PrimeFaces

Oftentimes, it is useful to utilize a chart for providing a visual representation of your data. PrimeFaces supplies charting solutions that make it easy to add visual representations of your data to web and mobile applications. If we couple the use of PrimeFaces charting components with RESTful web service data, we can create custom charts that scale well for both desktop and mobile devices. In this post, I will update the Java EE 7 Hands On Lab MoviePlex application to provide a dashboard into which we can integrate PrimeFaces chart components. We'll create one chart in this example, but you can use this post to help you build even more charts in a similar manner. Specifically, we will utilize a RESTful web service to glean movie theater capacity information, and we'll display each of the theater capacities using a PrimeFaces Bar Chart.

To begin, download the Java EE 7 Hands On Lab application solution archive, if you have not already done so. From there, open it up within NetBeans IDE. To create this post, I am using NetBeans 8.0.2. Once the project has been imported into NetBeans, deploy it to your application server (GlassFish 4.1 in my case) by right-clicking on the project and choosing Run. Once deployment is complete, open the theater web service within a browser by opening the following URL: http://localhost:8080/ExploringJavaEE7/webresources/theater/. The web service should produce a listing that looks similar to that in Figure 1.

We will utilize the data from this web service to feed our dashboard widget. Let's first create the backend code, and then we will tackle the UI. First, create a new package named org.glassfish.movieplex7.jsf by right-clicking on Source Packages and selecting "New..." -> "Java Packages". Next, create a JSF Managed Bean controller by right-clicking on that package, selecting "New..." -> "JSF Managed Bean", and naming it DashboardController. Let's annotate the controller as @SessionScoped, and then implement java.io.Serializable. In this controller, we will obtain the data and construct the model for the dashboard. We will first query the web service utilizing a JAX-RS client, and we will utilize the data to populate a list of Theater objects. Therefore, we need to define the following four fields to begin:

Client jaxRsClient;
// Typically not hard coded...store in a properties file or database
String baseUri = "http://localhost:8080/ExploringJavaEE7/webresources/theater/";
private List<Theater> theaterList;
private BarChartModel theaterCapacityModel;

The Client is of type javax.ws.rs.client.Client, and we will initialize the field within the class constructor by calling upon the javax.ws.rs.client.ClientBuilder, as follows:

public DashboardController() {
    jaxRsClient = ClientBuilder.newClient();
}

Next up, we need to create a method to load the data and to create and configure the model. In our controller, the init() method simply delegates these tasks to other methods. It invokes two of them: loadData() and createTheaterCapacityModel().

public void init() {
    loadData();
    createTheaterCapacityModel();
}

The code is written such that it will be easy to add more widgets to our dashboard at a later date, if desired. The loadData() method provides the implementation for loading the data from our web service into our local list.
private void loadData() {
    theaterList = jaxRsClient.target(baseUri)
            .request("application/xml")
            .get(new GenericType<List<Theater>>() {
            });
}

If we had more widgets, then we would add the data loading code for those data models into this method as well. Next, we need to initialize the org.primefaces.model.chart.BarChartModel that we had defined, and load it with the data from the web service. The initTheaterCapacityModel() method contains the implementation for creating the BarChartModel and populating it with one or more ChartSeries objects to build the data.

public BarChartModel initTheaterCapacityModel() {

    BarChartModel model = new BarChartModel();

    ChartSeries theaterCapacity = new ChartSeries();
    theaterCapacity.setLabel("Capacities");

    for (Theater theater : theaterList) {
        theaterCapacity.set(theater.getId(), theater.getCapacity());
    }
    model.addSeries(theaterCapacity);

    return model;
}

As you can see, this model consists of a single org.primefaces.model.chart.ChartSeries object. Actually, the model can contain more than a single ChartSeries object, and different colored bars will be used to display that data within the chart (a sketch of a second series follows at the end of this section). In this case, we simply add the theater ID and the capacity for each Theater object to the ChartSeries object, and then we add that to the BarChartModel.

The createTheaterCapacityModel() method is invoked within our init() method, and in it we call upon the initTheaterCapacityModel() method for creation of the org.primefaces.model.chart.BarChartModel, and then configure it accordingly.

private void createTheaterCapacityModel() {
    theaterCapacityModel = initTheaterCapacityModel();

    theaterCapacityModel.setTitle("Theater Capacity");
    theaterCapacityModel.setLegendPosition("ne");
    theaterCapacityModel.setBarPadding(3);
    theaterCapacityModel.setShadow(false);

    Axis xAxis = theaterCapacityModel.getAxis(AxisType.X);
    xAxis.setLabel("Theater");

    Axis yAxis = theaterCapacityModel.getAxis(AxisType.Y);
    yAxis.setLabel("Capacity");
    yAxis.setMin(0);
    yAxis.setMax(200);
}

As you can see, inside of the method we initialize the model by calling upon initTheaterCapacityModel(), and then we configure it via a series of "set" methods. Specifically, we set the title and legend position, and provide some visual configuration. Next, we set up each axis by calling upon the model's getAxis() method and passing the X and Y axis constants. We then configure each axis to our liking by setting a label, and min/max values for the Y axis. See the full sources for the class at the end of this post.

That does it for the server-side code. Now let's take a look at the UI code that is used to display the chart component. Begin by generating a new XHTML file at the root of the Web Pages folder in your project by right-clicking and choosing "New..." -> "XHTML...", and name the file dashboard.xhtml.
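Before moving on to the UI, here is the second-series sketch referenced above. This is purely illustrative; the "occupancy" series and its values are invented and are not part of the MoviePlex lab:

// Hypothetical second series: rendered as a second set of bars in another color.
ChartSeries occupancy = new ChartSeries();
occupancy.setLabel("Occupancy");

for (Theater theater : theaterList) {
    // Invented value for illustration; a real application would query this.
    occupancy.set(theater.getId(), theater.getCapacity() / 2);
}
model.addSeries(occupancy);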
The sources for dashboard.xhtml should contain the following:

<html xmlns:f="http://xmlns.jcp.org/jsf/core"
      xmlns:h="http://xmlns.jcp.org/jsf/html"
      xmlns:p="http://primefaces.org/ui"
      xmlns="http://www.w3.org/1999/xhtml">
    <h:head>
        <title>Theater Dashboard</title>
    </h:head>
    <h:body>
        <f:event listener="#{dashboardController.initView}" type="preRenderView"/>
        <h:form id="theaterDash" prependId="false">
            <p:growl id="growl" showDetail="true"/>
            <p:layout fullPage="true">
                <p:layoutUnit position="center">
                    <p:panel header="Capacity for Theaters" id="theater_capacity" style="border: 0px;">
                        <p:chart model="#{dashboardController.theaterCapacityModel}"
                                 style="border: 0px; height: 200px; width: 500px;"
                                 type="bar">
                        </p:chart>
                    </p:panel>
                </p:layoutUnit>
            </p:layout>
            <p:poll interval="60" listener="#{dashboardController.pollData}"/>
        </h:form>
    </h:body>
</html>

Fairly simplistic, the JSF view contains a PrimeFaces layout, including a panel and a chart. Near the top of the view, an f:event tag is used to invoke the listener method implemented within the DashboardController class, identified by initView(). For the purposes of this example, the p:chart tag is where the magic happens. The chart type in this case is set to "bar", although other options are available (visit http://www.primefaces.org/showcase). The model is set to #{dashboardController.theaterCapacityModel}, which we defined, populated, and configured within the controller class. We then provide a width and a height to make the chart display nicely. In case the data changes (I know theaters do not increase or decrease in size often, but go with me here), we added a PrimeFaces poll component to invoke the pollData() method, which refreshes the data periodically. In this case, the data will be refreshed every 60 seconds.

When complete, the chart should look like that in Figure 2. The chart is interactive, and if you click on a label in the legend, its bars will become hidden. This is handy if you have more than one category (via the ChartSeries). You can even include a p:ajax tag within the chart component and invoke an action when the chart is clicked on...perhaps a dialog will pop up to display some additional data on the item which is clicked (a sketch follows below). That does it...now you can create even more charts utilizing PrimeFaces and RESTful web services. I suggest building upon the MoviePlex application to see what other possibilities can be had.
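As a sketch of the p:ajax idea just mentioned (the listener name and body are hypothetical, not part of the lab):

// In dashboard.xhtml, inside the p:chart tag:
//   <p:ajax event="itemSelect" listener="#{dashboardController.itemSelect}"/>

// And a hypothetical listener method in DashboardController:
public void itemSelect(org.primefaces.event.ItemSelectEvent event) {
    // For a bar chart, the event carries the clicked item and series indexes,
    // which could be used to look up the theater and open a dialog.
    System.out.println("Selected item index: " + event.getItemIndex()
            + ", series index: " + event.getSeriesIndex());
}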
Full sources for the DashboardController class:

package org.glassfish.movieplex7.jsf;

import java.util.List;

import javax.inject.Named;
import javax.enterprise.context.SessionScoped;
import javax.faces.component.UIComponent;
import javax.faces.component.UIViewRoot;
import javax.faces.context.FacesContext;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.GenericType;

import org.glassfish.movieplex7.entities.Theater;

import org.primefaces.model.chart.Axis;
import org.primefaces.model.chart.AxisType;
import org.primefaces.model.chart.BarChartModel;
import org.primefaces.model.chart.ChartSeries;

/**
 * @author Juneau
 */
@Named(value = "dashboardController")
@SessionScoped
public class DashboardController implements java.io.Serializable {

    Client jaxRsClient;
    // Typically not hard coded...store in a properties file or database
    String baseUri = "http://localhost:8080/ExploringJavaEE7/webresources/theater/";
    private List<Theater> theaterList;
    private BarChartModel theaterCapacityModel;

    /**
     * Creates a new instance of DashboardController
     */
    public DashboardController() {
        jaxRsClient = ClientBuilder.newClient();
    }

    public void init() {
        loadData();
        createTheaterCapacityModel();
    }

    /**
     * Initializes the view on page render...if we wish to grab a reference
     * to a panel, etc.
     */
    public void initView() {
        UIViewRoot viewRoot = FacesContext.getCurrentInstance().getViewRoot();
        // Do something
    }

    public void pollData() {
        System.out.println("polling data...");
        loadData();
    }

    /**
     * Uses the JAX-RS client to load the data
     */
    private void loadData() {
        theaterList = jaxRsClient.target(baseUri)
                .request("application/xml")
                .get(new GenericType<List<Theater>>() {
                });
    }

    /**
     * Initialize the Bar Chart Model for displaying theater capacities
     *
     * @return
     */
    public BarChartModel initTheaterCapacityModel() {

        BarChartModel model = new BarChartModel();

        ChartSeries theaterCapacity = new ChartSeries();
        theaterCapacity.setLabel("Capacities");

        for (Theater theater : theaterList) {
            theaterCapacity.set(theater.getId(), theater.getCapacity());
        }
        model.addSeries(theaterCapacity);

        return model;
    }

    private void createTheaterCapacityModel() {
        theaterCapacityModel = initTheaterCapacityModel();

        theaterCapacityModel.setTitle("Theater Capacity");
        theaterCapacityModel.setLegendPosition("ne");
        theaterCapacityModel.setBarPadding(3);
        theaterCapacityModel.setShadow(false);

        Axis xAxis = theaterCapacityModel.getAxis(AxisType.X);
        xAxis.setLabel("Theater");

        Axis yAxis = theaterCapacityModel.getAxis(AxisType.Y);
        yAxis.setLabel("Capacity");
        yAxis.setMin(0);
        yAxis.setMax(200);
    }

    /**
     * @return the theaterCapacityModel
     */
    public BarChartModel getTheaterCapacityModel() {
        return theaterCapacityModel;
    }

    /**
     * @param theaterCapacityModel the theaterCapacityModel to set
     */
    public void setTheaterCapacityModel(BarChartModel theaterCapacityModel) {
        this.theaterCapacityModel = theaterCapacityModel;
    }
}

Reference: RESTful Charts with JAX-RS and PrimeFaces from our JCG partner Josh Juneau at the Josh’s Dev Blog – Java, Java EE, Jython, Oracle, and More… blog....

Async abstractions using rx-java

One of the big benefits of using rx-java, for me, has been that the code looks exactly the same whether the underlying calls are synchronous or asynchronous, hence the title of this entry. Consider a very simple use case of client code making three slow-running calls and combining the results into a list:

String op1 = service1.operation();
String op2 = service2.operation();
String op3 = service3.operation();
Arrays.asList(op1, op2, op3);

Since the calls are synchronous, the time taken to do this is additive. To simulate a slow call, the following is the type of implementation in each of the method calls:

public String operation() {
    logger.info("Start: Executing slow task in Service 1");
    Util.delay(7000);
    logger.info("End: Executing slow task in Service 1");
    return "operation1";
}

So the first attempt at using rx-java with these implementations is to simply have these long-running operations return the versatile type Observable; a bad implementation would look like this:

public Observable<String> operation() {
    logger.info("Start: Executing slow task in Service 1");
    Util.delay(7000);
    logger.info("End: Executing slow task in Service 1");
    return Observable.just("operation 1");
}

With this, the caller implementation changes to the following:

Observable<String> op1 = service1.operation();
Observable<String> op2 = service2.operation();
Observable<String> op3 = service3.operation();

Observable<List<String>> lst = Observable.merge(op1, op2, op3).toList();

See how the caller composes the results using the merge method. However, each of the service calls is still synchronous at this point. To make the calls asynchronous, the services can be made to use a thread pool, the following way:

public class Service1 {
    private static final Logger logger = LoggerFactory.getLogger(Service1.class);

    public Observable<String> operation() {
        return Observable.<String>create(s -> {
            logger.info("Start: Executing slow task in Service 1");
            Util.delay(7000);
            s.onNext("operation 1");
            logger.info("End: Executing slow task in Service 1");
            s.onCompleted();
        }).subscribeOn(Schedulers.computation());
    }
}

subscribeOn uses the specified Scheduler to run the actual operation. The beauty of the approach is that the calling code of this service is not changed at all; the implementation there remains exactly the same as before, whereas the service calls are now asynchronous.
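To round off the sample, here is a minimal sketch (not from the original post) of consuming the composed Observable. Since toList() emits a single item once all merged sources complete, the subscriber receives the combined result in one callback:

// The lambda fires once, with all three results collected into the list.
Observable.merge(op1, op2, op3)
        .toList()
        .subscribe(results -> System.out.println("Combined result: " + results));

If you are interested in exploring this sample further, here is a github repo with working examples.

Reference: Async abstractions using rx-java from our JCG partner Biju Kunjummen at the all and sundry blog....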

Dropwizard, MongoDB and Gradle Experimenting

Introduction

I created a small project using Dropwizard, MongoDB and Gradle. It actually started as an experiment with a Guava cache used as a buffer for sending counters to MongoDB (or any other DB). I also wanted to try Gradle with a MongoDB plugin. Next, I wanted to create some kind of interface to check this framework, and I decided to try out Dropwizard. And this is how this project was created.

This post is not a tutorial for any of the chosen technologies. It is a small showcase, which I did as an experiment. I guess there are some flaws and maybe I am not using all "best practices". However, I do believe that the project, with the help of this post, can be a good starting point for the different technologies I used. I also tried to show some design choices which help achieve SRP, decoupling, cohesion, etc. I decided to begin the post with the use-case description and how I implemented it. After that, I will explain what I did with Gradle, MongoDB (and embedded) and Dropwizard. Before I begin, here's the source code:

https://github.com/eyalgo/CountersBuffering

The Use-Case: Counters With Buffer

We have some input requests into our servers. During the processing of a request, we choose to "paint" it with some data (decided by some logic). Some requests will be painted by Value-1, some by Value-2, etc., and some will not be painted at all. We want to limit the number of painted requests (per paint value). In order to enforce the limit, for each paint value we know the maximum, but we also need to count (per paint value) the number of painted requests. As the system has several servers, the counters should be shared by all servers.

The latency is crucial. Normally we get 4-5 milliseconds per request processing (for the whole flow, not just the painting), so we don't want increasing the counters to increase the latency. Instead, we'll keep a buffer: the client will send 'increase' to the buffer, and the buffer will periodically update the repository with bulk increments. I know it is possible to use Hazelcast or Couchbase or some other similar fast in-memory DB directly, but for our use-case this was the best solution.

The principle is simple:

- The dependent module will call a service to increase a counter for some key
- The implementation keeps a buffer of counters per key
- It is thread safe
- The writing happens in a separate thread
- Each write will do a bulk increase

Buffer

For the buffer, I used Google Guava cache.

Buffer Structure

Creating the buffer:

private final LoadingCache<Counterable, BufferValue> cache;
...

this.cache = CacheBuilder.newBuilder()
    .maximumSize(bufferConfiguration.getMaximumSize())
    .expireAfterWrite(bufferConfiguration.getExpireAfterWriteInSec(), TimeUnit.SECONDS)
    .expireAfterAccess(bufferConfiguration.getExpireAfterAccessInSec(), TimeUnit.SECONDS)
    .removalListener((notification) -> increaseCounter(notification))
    .build(new BufferValueCacheLoader());
...

(Counterable is described below.)

BufferValueCacheLoader implements the interface CacheLoader. When we call increase (see below), we first get from the cache by key. If the key does not exist, the loader returns a new value.
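Before looking at the loader code below: the cache stores BufferValue objects, which, as described shortly, wrap an AtomicInteger. The post does not show the BufferValue class itself, so the following is an assumed reconstruction based on the methods used elsewhere in the article (increment(), compareAndSet() and getValue()):

import java.util.concurrent.atomic.AtomicInteger;

// Assumed implementation of BufferValue (the class is not shown in the post).
public class BufferValue {

    private final AtomicInteger value = new AtomicInteger();

    public int increment() {
        return value.incrementAndGet();
    }

    public boolean compareAndSet(int expected, int newValue) {
        return value.compareAndSet(expected, newValue);
    }

    public int getValue() {
        return value.get();
    }
}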
BufferValueCacheLoader:

public class BufferValueCacheLoader extends CacheLoader<Counterable, BufferValue> {
    @Override
    public BufferValue load(Counterable key) {
        return new BufferValue();
    }
}

BufferValue wraps an AtomicInteger (I would need to change it to Long at some point).

Increase the Counter

Increasing the counter, and sending if it passed the threshold:

public void increase(Counterable key) {
    BufferValue meter = cache.getUnchecked(key);
    int currentValue = meter.increment();
    if (currentValue > threshold) {
        if (meter.compareAndSet(currentValue, currentValue - threshold)) {
            increaseCounter(key, threshold);
        }
    }
}

When increasing a counter, we first get the current value from the cache (with the help of the loader, as described above). The compareAndSet will atomically check whether the value is the same (not modified by another thread). If so, it will update the value and return true. If successful (returned true), the buffer calls the updater.

View the Buffer

After developing the service, I wanted a way to view the buffer, so I implemented the following method, which is used by the front-end layer (Dropwizard's resource). It is a small example of a Java 8 Stream and lambda expression. Getting all counters in the cache:

return ImmutableMap.copyOf(cache.asMap())
    .entrySet().stream()
    .collect(
        Collectors.toMap((entry) -> entry.getKey().toString(),
                         (entry) -> entry.getValue().getValue()));

MongoDB

I chose MongoDB for two reasons:

- We have a similar implementation in our system, for which we decided to use MongoDB as well
- It is easy to use with an embedded server

I tried to design the system so it would be possible to choose any other persistence implementation and swap it in. I used Morphia as the MongoDB client layer instead of using the Java client directly. With Morphia you create a DAO, which is the connection to a MongoDB collection. You also declare a simple Java bean (POJO) that represents a document in a collection. Once you have the DAO, you can do operations on the collection the "Java way", with a fairly easy API. You can run queries and any other CRUD operations, and more.

I had two operations: increasing a counter and getting all counters. The service implementations do not extend Morphia's BasicDAO; instead they have a class that inherits from it. I used composition (over inheritance) because I wanted to have more behavior in both services.

In order to be consistent with the key representation, and to hide the way it is implemented from the dependent code, I used an interface, Counterable, with a single method, counterKey():

public interface Counterable {
    String counterKey();
}

The DAO, which is a composition inside the services:

final class MongoCountersDao extends BasicDAO<Counter, ObjectId> {
    MongoCountersDao(Datastore ds) {
        super(Counter.class, ds);
    }
}

Increasing the Counter

MongoCountersUpdater extends AbstractCountersUpdater, which implements CountersUpdater:

@Override
protected void increaseCounter(String key, int value) {
    Query<Counter> query = dao.createQuery();
    query.criteria("id").equal(key);
    UpdateOperations<Counter> ops = dao.getDs()
        .createUpdateOperations(Counter.class)
        .inc("count", value);
    dao.getDs().update(query, ops, true);
}

Embedded MongoDB

In order to run tests on the persistence layer, I wanted to use an in-memory database. There's a MongoDB plugin for that.
With this plugin you can run a server by just creating it at runtime, or run it as a goal in Maven / a task in Gradle.

https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo
https://github.com/sourcemuse/GradleMongoPlugin

Embedded MongoDB on Gradle

I will elaborate more on Gradle later, but here's what I needed to do in order to set up the embedded Mongo.

Dependencies:

dependencies {
    // More dependencies here
    testCompile 'com.sourcemuse.gradle.plugin:gradle-mongo-plugin:0.4.0'
}

Setup properties:

mongo {
    // logFilePath: The desired log file path (defaults to 'embedded-mongo.log')
    logging 'console'
    mongoVersion 'PRODUCTION'
    port 12345
    // storageLocation: The directory location from where embedded Mongo will run, such as /tmp/storage (defaults to a java temp directory)
}

Embedded MongoDB Gradle tasks:

- startMongoDb will just start the server. It will run until stopped.
- stopMongoDb will stop it.
- startManagedMongoDb test: two tasks which will start the embedded server before the tests run. The server will shut down when the JVM finishes (when the tests finish).

Gradle

Although I have only touched the tip of the iceberg, I have started seeing the strength of Gradle. It wasn't even that hard setting up the project.

Gradle Setup

First, I created a Gradle project in Eclipse (after installing the plugin). Then I needed to set up the dependencies. Very simple. Just like Maven.

One Big JAR Output

When I want to create one big jar from all libraries in Maven, I use the shade plugin. I was looking for something similar, and found the gradle-one-jar plugin: https://github.com/rholder/gradle-one-jar I added that plugin (apply plugin: 'gradle-one-jar') and added one-jar to the classpath:

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.sourcemuse.gradle.plugin:gradle-mongo-plugin:0.4.0'
        classpath 'com.github.rholder:gradle-one-jar:1.0.4'
    }
}

And added a task:

mainClassName = 'org.eyalgo.server.dropwizard.CountersBufferApplication'

task oneJar(type: OneJar) {
    mainClass = mainClassName
    archiveName = 'counters.jar'
    mergeManifestFromJar = true
}

Those were the necessary actions to make the application run.

Dropwizard

Dropwizard is a stack of libraries that makes it easy to create web servers quickly. It uses Jetty for HTTP and Jersey for REST, and it has other mature libraries for creating complicated services. It can be used to easily develop a microservice. As I explained in the introduction, I will not cover all of Dropwizard's features and setup; there are plenty of sites for that. I will briefly cover the actions I took to make the application run.

Gradle Run Task

run {
    args 'server', './src/main/resources/config/counters.yml'
}

The first argument is server. The second argument is the location of the configuration file. If you don't give Dropwizard the first argument, you will get a nice error message listing the possible options:

positional arguments:
  {server,check}  available commands

I already showed how to create one jar in the Gradle section.

Configuration

In Dropwizard, you set up the application using a class that extends Configuration. The fields in the class should align with the properties in the yml configuration file. It is a good practice to put the properties in groups, based on their usage/responsibility. For example, I created a group for Mongo parameters. In order for the configuration class to read the sub-groups correctly, you need to create a class that aligns with the properties in the group. Then, in the main configuration, add this class as a member and mark it with the annotation @JsonProperty.
Example:

@JsonProperty("mongo")
private MongoServicesFactory servicesFactory = new MongoServicesFactory();

@JsonProperty("buffer")
private BufferConfiguration bufferConfiguration = new BufferConfiguration();

Example: Changing the Ports

Here's the part of the configuration file that sets the ports for the application:

server:
  adminMinThreads: 1
  adminMaxThreads: 64
  applicationConnectors:
    - type: http
      port: 9090
  adminConnectors:
    - type: http
      port: 9091

Health Check

Dropwizard gives a basic admin API out of the box. I changed the port to 9091. I created a health check for the MongoDB connection. You need to extend HealthCheck and implement the check method:

private final MongoClient mongo;
...

protected Result check() throws Exception {
    try {
        mongo.getDatabaseNames();
        return Result.healthy();
    } catch (Exception e) {
        return Result.unhealthy("Cannot connect to " + mongo.getAllAddress());
    }
}

Other features are pretty much self-explanatory, or as simple as in any getting-started tutorial.

Ideas for Enhancement

There are some things I may try to add:

- Add tests to the Dropwizard section. This project started as a PoC, so, unlike usual, I skipped the tests in the server part. Dropwizard has Testing Dropwizard, which I want to try.
- Different persistence implementations (Couchbase? Hazelcast?).
- Injection using Google Guice, and with the help of that, injecting different persistence implementations.

That's all. Hope that helps.

Source code: https://github.com/eyalgo/CountersBuffering

Reference: Dropwizard, MongoDB and Gradle Experimenting from our JCG partner Eyal Golan at the Learning and Improving as a Craftsman Developer blog....

Resolve coreference using Stanford CoreNLP

Coreference resolution is the task of finding all expressions that refer to the same entity in a text. The Stanford CoreNLP coreference resolution system is the state-of-the-art system for resolving coreference in text. To use the system, we usually create a pipeline, which requires tokenization, sentence splitting, part-of-speech tagging, lemmatization, named entity recognition, and parsing. However, sometimes we use other tools for preprocessing, particularly when we are working on a specific domain. In these cases, we need a stand-alone coreference resolution system. This post demonstrates how to create such a system using Stanford CoreNLP.

Load properties

In general, we can just create an empty Properties, because the Stanford CoreNLP tool can automatically load the default one from the model jar file, which is under edu.stanford.nlp.pipeline. In other cases, we would like to use specific properties. The following code shows one example of loading the property file from the working directory.

private static final String PROPS_SUFFIX = ".properties";

private Properties loadProperties(String name) {
    return loadProperties(name,
        Thread.currentThread().getContextClassLoader());
}

private Properties loadProperties(String name, ClassLoader loader) {
    if (name.endsWith(PROPS_SUFFIX))
        name = name.substring(0, name.length() - PROPS_SUFFIX.length());
    name = name.replace('.', '/');
    name += PROPS_SUFFIX;
    Properties result = null;

    // Returns null on lookup failures
    System.err.println("Searching for resource: " + name);
    InputStream in = loader.getResourceAsStream(name);
    try {
        if (in != null) {
            InputStreamReader reader = new InputStreamReader(in, "utf-8");
            result = new Properties();
            result.load(reader); // Can throw IOException
        }
    } catch (IOException e) {
        result = null;
    } finally {
        IOUtils.closeIgnoringExceptions(in);
    }

    return result;
}

Initialize the system

After getting the properties, we can initialize the coreference resolving system. For example:

try {
    corefSystem = new SieveCoreferenceSystem(new Properties());
    mentionExtractor = new MentionExtractor(corefSystem.dictionaries(),
        corefSystem.semantics());
} catch (Exception e) {
    System.err.println("ERROR: cannot create DeterministicCorefAnnotator!");
    e.printStackTrace();
    throw new RuntimeException(e);
}

Annotation

To feed the resolving system, we first need to understand the structure of an annotation, which represents a span of text in a document. This is the trickiest part of this post, because to my knowledge there is no document that explains it in detail. The Annotation class itself is just an implementation of Map. Basically, an annotation contains a sequence of sentences (each of which is another map). For each sentence, we need to provide a sequence of tokens (a list of CoreLabel), the parse tree (Tree), and the dependency graph (SemanticGraph).

Annotation
  CoreAnnotations.SentencesAnnotation -> sentences
    CoreAnnotations.TokensAnnotation -> tokens
    TreeCoreAnnotations.TreeAnnotation -> Tree
    SemanticGraphCoreAnnotations.CollapsedDependenciesAnnotation -> SemanticGraph

Tokens

The sequence of tokens represents the text of one sentence. Each token is an instance of CoreLabel, which stores the word, tag (part-of-speech), lemma, named entity, normalized named entity, etc.

List<CoreLabel> tokens = new ArrayList<>();
for (int i = 0; i < n; i++) {
    // create a token
    CoreLabel token = new CoreLabel();
    token.setWord(word);
    token.setTag(tag);
    token.setNer(ner);
    ...
    tokens.add(token);
}
ann.set(TokensAnnotation.class, tokens);
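Putting the structure above together, a sentence-level map can be assembled and attached to the document annotation. The following is a sketch based on the hierarchy just described; it is not from the original post, and it assumes the tree and semantic graph built in the next sections:

// Sketch: assemble one sentence CoreMap and attach it to the document Annotation.
Annotation sentence = new Annotation(sentenceText);
sentence.set(CoreAnnotations.TokensAnnotation.class, tokens);
sentence.set(TreeCoreAnnotations.TreeAnnotation.class, tree);
sentence.set(
    SemanticGraphCoreAnnotations.CollapsedDependenciesAnnotation.class,
    semanticGraph);

Annotation ann = new Annotation(documentText);
ann.set(CoreAnnotations.SentencesAnnotation.class,
    Collections.singletonList(sentence));

Parse tree

A parse tree is an instance of Tree.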
If you use the Penn treebank style, the Stanford CoreNLP tool provides an easy way to parse that format:

Tree tree = Tree.valueOf(getText());
ann.set(TreeAnnotation.class, tree);

Semantic graph

A semantic graph can be created from the tree, using typed dependencies derived by rule. However, the code is not that straightforward:

GrammaticalStructureFactory grammaticalStructureFactory =
    new EnglishGrammaticalStructureFactory();
GrammaticalStructure gs = grammaticalStructureFactory
    .newGrammaticalStructure(tree);
SemanticGraph semanticGraph =
    new SemanticGraph(gs.typedDependenciesCollapsed());

Please note that Stanford CoreNLP provides different types of dependencies. Among others, the coreference system needs "collapsed dependencies", so to set the annotation, you may write:

ann.set(
    CollapsedDependenciesAnnotation.class,
    new SemanticGraph(gs.typedDependenciesCollapsed()));

Resolve coreference

At last, you can feed the system with the annotation. The following code is one example. It is a bit long, but easy to understand.

private void annotate(Annotation annotation) {
    try {
        List<Tree> trees = new ArrayList<Tree>();
        List<List<CoreLabel>> sentences = new ArrayList<List<CoreLabel>>();

        // extract trees and sentence words
        // we are only supporting the new annotation standard for this Annotator!
        if (annotation.containsKey(CoreAnnotations.SentencesAnnotation.class)) {
            // int sentNum = 0;
            for (CoreMap sentence : annotation
                    .get(CoreAnnotations.SentencesAnnotation.class)) {
                List<CoreLabel> tokens = sentence
                        .get(CoreAnnotations.TokensAnnotation.class);
                sentences.add(tokens);
                Tree tree = sentence.get(TreeCoreAnnotations.TreeAnnotation.class);
                trees.add(tree);
                MentionExtractor.mergeLabels(tree, tokens);
                MentionExtractor.initializeUtterance(tokens);
            }
        } else {
            System.err
                    .println("ERROR: this coreference resolution system requires SentencesAnnotation!");
            return;
        }

        // extract all possible mentions
        // this is created for each new annotation because it is not threadsafe
        RuleBasedCorefMentionFinder finder = new RuleBasedCorefMentionFinder();
        List<List<Mention>> allUnprocessedMentions = finder
                .extractPredictedMentions(annotation, 0, corefSystem.dictionaries());

        // add the relevant info to mentions and order them for coref
        Document document = mentionExtractor.arrange(
                annotation, sentences, trees, allUnprocessedMentions);
        List<List<Mention>> orderedMentions = document.getOrderedMentions();
        if (VERBOSE) {
            for (int i = 0; i < orderedMentions.size(); i++) {
                System.err.printf("Mentions in sentence #%d:\n", i);
                for (int j = 0; j < orderedMentions.get(i).size(); j++) {
                    System.err.println("\tMention #" + j + ": "
                            + orderedMentions.get(i).get(j).spanToString());
                }
            }
        }

        Map<Integer, CorefChain> result = corefSystem.coref(document);
        annotation.set(CorefCoreAnnotations.CorefChainAnnotation.class, result);

        // for backward compatibility
        if (OLD_FORMAT) {
            List<Pair<IntTuple, IntTuple>> links = SieveCoreferenceSystem
                    .getLinks(result);

            if (VERBOSE) {
                System.err.printf("Found %d coreference links:\n", links.size());
                for (Pair<IntTuple, IntTuple> link : links) {
                    System.err.printf(
                            "LINK (%d, %d) -> (%d, %d)\n",
                            link.first.get(0), link.first.get(1),
                            link.second.get(0), link.second.get(1));
                }
            }

            //
            // save the coref output as CorefGraphAnnotation
            //

            // cdm 2013: this block didn't seem to be doing anything needed....
            // List<List<CoreLabel>> sents = new ArrayList<List<CoreLabel>>();
            // for (CoreMap sentence:
            //      annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
            //     List<CoreLabel> tokens =
            //         sentence.get(CoreAnnotations.TokensAnnotation.class);
            //     sents.add(tokens);
            // }

            // this graph is stored in CorefGraphAnnotation -- the raw links found
            // by the coref system
            List<Pair<IntTuple, IntTuple>> graph = new ArrayList<Pair<IntTuple, IntTuple>>();

            for (Pair<IntTuple, IntTuple> link : links) {
                //
                // Note: all offsets in the graph start at 1 (not at 0!)
                // we do this for consistency reasons, as indices for syntactic
                // dependencies start at 1
                //
                int srcSent = link.first.get(0);
                int srcTok = orderedMentions.get(srcSent - 1).get(
                        link.first.get(1) - 1).headIndex + 1;
                int dstSent = link.second.get(0);
                int dstTok = orderedMentions.get(dstSent - 1).get(
                        link.second.get(1) - 1).headIndex + 1;
                IntTuple dst = new IntTuple(2);
                dst.set(0, dstSent);
                dst.set(1, dstTok);
                IntTuple src = new IntTuple(2);
                src.set(0, srcSent);
                src.set(1, srcTok);
                graph.add(new Pair<IntTuple, IntTuple>(src, dst));
            }
            annotation.set(CorefCoreAnnotations.CorefGraphAnnotation.class, graph);

            for (CorefChain corefChain : result.values()) {
                if (corefChain.getMentionsInTextualOrder().size() < 2)
                    continue;
                Set<CoreLabel> coreferentTokens = Generics.newHashSet();
                for (CorefMention mention : corefChain.getMentionsInTextualOrder()) {
                    CoreMap sentence = annotation.get(
                            CoreAnnotations.SentencesAnnotation.class).get(
                            mention.sentNum - 1);
                    CoreLabel token = sentence.get(
                            CoreAnnotations.TokensAnnotation.class).get(
                            mention.headIndex - 1);
                    coreferentTokens.add(token);
                }
                for (CoreLabel token : coreferentTokens) {
                    token.set(
                            CorefCoreAnnotations.CorefClusterAnnotation.class,
                            coreferentTokens);
                }
            }
        }
    } catch (RuntimeException e) {
        throw e;
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
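Once annotate() has run, the resolved chains can be read back from the annotation. As a brief usage sketch (not from the original post):

// Sketch: print each coreference chain stored by annotate() above.
Map<Integer, CorefChain> chains =
        annotation.get(CorefCoreAnnotations.CorefChainAnnotation.class);
for (CorefChain chain : chains.values()) {
    System.out.println("Representative mention: "
            + chain.getRepresentativeMention().mentionSpan);
    for (CorefChain.CorefMention mention : chain.getMentionsInTextualOrder()) {
        System.out.println("\t" + mention.mentionSpan);
    }
}

Reference: Resolve coreference using Stanford CoreNLP from our JCG partner Yifan Peng at the PGuru blog....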

Pass Streams Instead of Lists

Opening disclaimer: this isn't always a good idea. I'll present the idea, along with some of the reasons why it's a good idea, but then I'll talk about some instances where it's not so great.

Being Lazy

As you may know, I've been dabbling in Python nearly as much as I've been working with Java. One thing that I've liked about Python since I found out about it is generators. They allow for lazy operations on collections, so you can pass iterators/generators around until you finally actually need the final result of the operations – without affecting the original collection (under most circumstances; you're not likely to affect it accidentally). I really enjoy the power of this idea. The laziness allows you to do practically no work until the results are needed, and it also means no useless memory is used to store intermediate collections.

Being Lazy in Java

Java has iterators too, but not generators. It does, however, have something that works fairly similarly when it comes to lazy operations on collections: Streams. While not quite as versatile as generators in Python, Streams can largely be used the same way.

Passing Streams Around

There are a lot of cases where you should return Streams instead of the resulting Lists (or other collections). This does something for you, even besides the lazy benefits mentioned above. If the receiver of the returned object wants to collect() it into something other than the List you had planned on returning, or they want to reduce() it in a way you never expected, you can give them a Stream and have nothing to worry about. They can then get what they need with a Stream method call or two.

What Sucks About This

There is a problem that can be difficult to deal with when it comes to Streams being passed around like they're collections: they're one-time-use-only. This means that if a function such as the one below wants to use a Stream instead of a List, it can't do so easily, since it needs to do two separate things with the List.

public static List<Integer> normalize(List<Integer> input) {
    int total = input.stream()
            .mapToInt(i -> i)
            .sum();

    return input.stream()
            .map(i -> i * 100 / total)
            .collect(Collectors.toList());
}

In order to take in a Stream instead, you need to collect() it, then run the two operations on it.

public static Stream<Integer> normalize(Stream<Integer> input) {
    List<Integer> inputList = input.collect(Collectors.toList());

    int total = inputList.stream()
            .mapToInt(i -> i)
            .sum();

    return inputList.stream()
            .map(i -> i * 100 / total);
}

This slightly defeats the purpose of passing Streams around. It's not horrible, since we're trying to use a "final" result of the Stream. Except that it's not a final result. It's an intermediate result that is used to calculate the next Stream output, and it creates an intermediate collection which wastes memory. There are ways around this, akin to how this "article" solves it, but they're either complicated to implement or prone to user errors (one such workaround is sketched below). I guess it's kind of okay to just use the second method I showed you, since it's still likely a pretty good performance boost over how the first one did it, but it just bugs me.
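The workaround just hinted at is to accept a Supplier of the Stream rather than the Stream itself, so the function can re-create the pipeline for each pass. This is a sketch of that idea, not something from the original post, and it assumes the caller can cheaply produce the Stream more than once (as a collection's stream() method can):

import java.util.function.Supplier;
import java.util.stream.Stream;

// Each call to input.get() produces a fresh Stream, so the data can be
// traversed twice without collecting it into an intermediate list.
public static Stream<Integer> normalize(Supplier<Stream<Integer>> input) {
    int total = input.get()
            .mapToInt(i -> i)
            .sum();

    return input.get()
            .map(i -> i * 100 / total);
}

A caller with a List would invoke it as normalize(list::stream). The trade-off is that this pushes the "re-creatable" requirement onto the caller, which is one of the user-error risks mentioned above.

Interesting (But Probably A Little Silly) Alternative

If you're familiar with my posts, you may feel like this article goes against an article I had written a while back about transforming collections using decorators. Technically, this post does think of that as a rather naive idea, especially since the idea was inspired by Streams.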
But there is one major benefit to the decorator idea over the Streams idea presented in this article: you can iterate over the decorated collections over and over again. It's probably not as efficient as Streams – especially since I'm not sure how to parallelize it – but it certainly has reusability going for it. There's a chance I'll look into the idea again and see if I can figure out a better way to do it, but I doubt it.

Outro

So, that's my idea. You can take it or leave it. I'm not sure how often this can be useful in typical projects, but I think I'm going to give it a try in my current and future projects. Thanks for reading. If you've got an opinion about this, comment below and let me know.

Reference: Pass Streams Instead of Lists from our JCG partner Jacob Zimmerman at the Programming Ideas With Jake blog....

Simplifying JAX-RS caching with CDI

This post explains (via a simple example) how you can use CDI producers to make it a little easier to leverage cache control semantics in your RESTful services.

The Cache-Control header was added in HTTP 1.1 as a much-needed improvement over the Expires header available in HTTP 1.0. RESTful web services can make use of this header in order to scale their applications and make them more efficient, e.g. if you can cache a response of a previous request, then you obviously need not make the same request to the server again if you are certain that your cached data is not stale!

How does JAX-RS help?

JAX-RS has had support for the Cache-Control header since its initial (1.0) version. The CacheControl class represents the real-world Cache-Control HTTP header and provides the ability to configure the header via simple setter methods. More on the CacheControl class in the JAX-RS 2.0 javadocs.

So how do I use the CacheControl class? Just return a Response object around which you can wrap an instance of the CacheControl class.

@Path("/testcache")
public class RESTfulResource {
    @GET
    @Produces("text/plain")
    public Response find() {
        CacheControl cc = new CacheControl();
        cc.setMaxAge(20);
        return Response.ok(UUID.randomUUID().toString()).cacheControl(cc).build();
    }
}

Although this is relatively convenient for a single method, repeatedly creating and returning CacheControl objects can get irritating for multiple methods.

CDI producers to the rescue!

CDI producers can help inject instances of classes which are not technically beans (as per the strict definition), or of classes over which you do not have control as far as decorating them with scopes and qualifiers is concerned. The idea is to:

- Have a custom annotation (@CacheControlConfig) to define default values for the Cache-Control header and allow for flexibility in case you want to override them:

@Retention(RUNTIME)
@Target({FIELD, PARAMETER})
public @interface CacheControlConfig {
    public boolean isPrivate() default true;
    public boolean noCache() default false;
    public boolean noStore() default false;
    public boolean noTransform() default true;
    public boolean mustRevalidate() default true;
    public boolean proxyRevalidate() default false;
    public int maxAge() default 0;
    public int sMaxAge() default 0;
}

- Use a CDI producer to create an instance of the CacheControl class by using the InjectionPoint object (injected with pleasure by CDI!), depending upon the annotation parameters:

public class CacheControlFactory {

    @Produces
    public CacheControl get(InjectionPoint ip) {

        CacheControlConfig ccConfig = ip.getAnnotated()
            .getAnnotation(CacheControlConfig.class);
        CacheControl cc = null;
        if (ccConfig != null) {
            cc = new CacheControl();
            cc.setMaxAge(ccConfig.maxAge());
            cc.setMustRevalidate(ccConfig.mustRevalidate());
            cc.setNoCache(ccConfig.noCache());
            cc.setNoStore(ccConfig.noStore());
            cc.setNoTransform(ccConfig.noTransform());
            cc.setPrivate(ccConfig.isPrivate());
            cc.setProxyRevalidate(ccConfig.proxyRevalidate());
            cc.setSMaxAge(ccConfig.sMaxAge());
        }

        return cc;
    }
}

- Just inject the CacheControl instance in your REST resource class and use it in your methods:

@Path("/testcache")
public class RESTfulResource {
    @Inject
    @CacheControlConfig(maxAge = 20)
    CacheControl cc;

    @GET
    @Produces("text/plain")
    public Response find() {
        return Response.ok(UUID.randomUUID().toString()).cacheControl(cc).build();
    }
}

Additional thoughts

- In this case, the scope of the produced CacheControl instance is @Dependent, i.e. it will live and die with the class which has injected it.
- The JAX-RS resource itself is request-scoped (by default), since the JAX-RS container creates a new instance for each client request; hence a new instance of the injected CacheControl will be created along with each HTTP request.
- You can also introduce CDI qualifiers to further narrow the scopes and account for corner cases.
- You might think that the same can be achieved using a JAX-RS filter. That is correct. But you would need to set the Cache-Control header manually (within a mutable MultivaluedMap), and the logic would not be flexible enough to account for different Cache-Control configurations for different scenarios (see the sketch at the end of this post).

Results of the experiment

Use NetBeans IDE to play with this example (recommended).

- Deploy the WAR and browse to http://localhost:8080/JAX-RS-Caching-CDI/testcache. A random string is returned and cached for 20 seconds (as per the configuration via the @CacheControlConfig annotation).
- A GET request to the same URL within that window will not result in an invocation of your server-side REST service; the browser will return the cached value.

Although the code is simple, if you are feeling lazy, you can grab the (maven) project from here and play around. Have fun!
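For contrast with the filter approach mentioned in the additional thoughts, here is a minimal sketch of what such a JAX-RS 2.0 filter might look like. This is an illustration, not part of the original example; note how it pins one fixed policy onto every response:

import java.io.IOException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.core.CacheControl;
import javax.ws.rs.ext.Provider;

@Provider
public class CacheControlFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext request,
                       ContainerResponseContext response) throws IOException {
        // One hard-coded Cache-Control policy for every response; this
        // rigidity is what the CDI producer approach avoids.
        CacheControl cc = new CacheControl();
        cc.setMaxAge(20);
        response.getHeaders().add("Cache-Control", cc);
    }
}

Reference: Simplifying JAX-RS caching with CDI from our JCG partner Abhishek Gupta at the Object Oriented.. blog....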

Starting out with jHiccup

After writing my post on 'How to detect and diagnose slow code in production' I was encouraged by a reader to try out jHiccup from Azul Systems. Last year I went to a talk by jHiccup's creator Gil Tene on the correct way to measure latency, where, amongst other things, he introduced us to jHiccup. It had been on my todo list of products to investigate, and this gave me the impetus to finally get on with my investigation.

jHiccup measures the system latency over and above your actual program code. For example, GC times and other OS and hardware events add latency spikes to the smooth running of your program. It is critical to understand these because your program can never have better latencies than the underlying environment on which it is running.

To cut a long story short, I love the tool and think it's going to be really useful to me now that I've started using it. This post is not about teaching you everything about jHiccup; I refer you to the documentation for that. This post is a 'starting out with jHiccup' guide, to show you how I got it running and hopefully whet your appetite to try it out on your own code.

Step 1: Download product

Download the code; it's open source and you can get it from here. Unzip the file and you will see the jHiccup.jar which we will use in the next step.

Step 2: Run your program with jHiccup

There are a number of ways to run jHiccup, but this is how I did it. All you need to do is add this vm option to your command line:

-javaagent:jHiccup.jar="-d 0 -i 1000 -l hiccuplog -c"

There are lots of configuration options; the ones chosen here mean:

- -d: the delay before starting to record latencies, allowing any code warm-up period to be ignored (default: after 30s).
- -i: interval data, how often the data is recorded (default: every 5s).
- -l: the name of the log file to which data is recorded.
- -c: launches a control JVM and records its data to logFile.c. Super useful for comparing against the actual program to see if there was a global event on the machine.

Step 3: Process the log file

Run this command on your log file (you can process both the program log file and the .c control log file):

jHiccupLogProcessor -i hiccuplog -o myhlog

This produces two files; we are interested in the one called myhlog (not myhlog.hgram), which we will use in the final step.

Step 4: Produce graph in Excel

Now for the really nice bit. Open the spreadsheet jHiccupPlotter.xls and make sure you enable the macros. You will see a sheet like this:

Just select your file from step 3 and choose a chart title (this is a really useful feature when you come to compare your graphs), and in a matter of seconds you will have your latency distribution graphs.

Example

I had a program (not particularly latency sensitive) and wanted to understand the effects that different garbage collectors had on latency. All I had to do was run my program with different garbage collector settings and compare the graphs.
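For reference, a complete command line for such a run might look like this (the application jar and GC flag here are placeholders for your own):

java -XX:+UseSerialGC \
     -javaagent:jHiccup.jar="-d 0 -i 1000 -l hiccuplog -c" \
     -jar myApp.jar

Of course this was a slightly manufactured example I happened to have to hand, but you get the point: it's easy to change JVM settings or code and get comparable results. The control program is also critical to understand what else is happening on your machine that might be affecting the latency of your program. These are my results: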
Using the serial collector: (graph)

Using the G1 collector: (graph)

Using the G1 collector – max pause set to 1ms: (graph)

Using the CMS collector: (graph)

Using the parallel GC: (graph)

Reference: Starting out with jHiccup from our JCG partner Daniel Shaya at the Rational Java blog....

Using Go to build a REST service on top of mongoDB

I've been following Go (go-lang) for a while now and finally had some time to experiment with it a bit more. In this post we'll create a simple HTTP server that uses mongoDB as a backend and provides a very basic REST API. In the rest of this article I assume you've got a Go environment set up and working. If not, look at the Go website for instructions (https://golang.org/doc/install).

Before we get started, we need to get the mongo driver for Go. In a console just type the following:

go get gopkg.in/mgo.v2

This will install the necessary libraries so we can access mongoDB from our Go code. We also need some data to experiment with. We'll use the same setup as in my previous article (http://www.smartjava.org/content/building-rest-service-scala-akka-http-a…).

Loading data into MongoDB

We use some stock-related information which you can download from here (http://jsonstudio.com/wp-content/uploads/2014/02/stocks.zip). You can easily do this by executing the following steps. First get the data:

wget http://jsonstudio.com/wp-content/uploads/2014/02/stocks.zip

Start mongodb in a different terminal:

mongod --dbpath ./data/

And finally use mongoimport to import the data:

unzip -c stocks.zip | mongoimport --db akka --collection stocks --jsonArray

And as a quick check, run a query to see if everything works:

jos@Joss-MacBook-Pro.local:~$ mongo akka
MongoDB shell version: 2.4.8
connecting to: akka
> db.stocks.findOne({},{Company: 1, Country: 1, Ticker:1 } )
{
        "_id" : ObjectId("52853800bb1177ca391c17ff"),
        "Ticker" : "A",
        "Country" : "USA",
        "Company" : "Agilent Technologies Inc."
}
>

At this point we have our test data and can start creating our Go-based HTTP server. You can find the complete code in a Gist here: https://gist.github.com/josdirksen/071f26a736eca26d7ea4 In the following sections we'll look at the various parts of this Gist to explain how to set up a Go-based HTTP server.

The main function

When you run a Go application, Go will look for the main function. For our server this main function looks like this:

func main() {

	server := http.Server{
		Addr:    ":8000",
		Handler: NewHandler(),
	}

	// start listening
	fmt.Println("Started server 2")
	server.ListenAndServe()

}

This will configure a server to run on port 8000, and any request that comes in will be handled by the NewHandler() instance we create in line 64 of the Gist. We start the server by calling the server.ListenAndServe() function. Now let's look at our handler that will respond to requests.
Now let’s look at our handler that will respond to requests.

The myHandler struct

Let’s first look at what this handler looks like:

// Constructor for the server handlers
func NewHandler() *myHandler {
	h := new(myHandler)
	h.defineMappings()

	return h
}

// Definition of this struct
type myHandler struct {
	// holds the mapping
	mux map[string]func(http.ResponseWriter, *http.Request)
}

// functions defined on struct
func (my *myHandler) defineMappings() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		panic(err)
	}

	// make the mux
	my.mux = make(map[string]func(http.ResponseWriter, *http.Request))

	// matching of request path
	my.mux["/hello"] = requestHandler1
	my.mux["/get"] = my.wrap(requestHandler2, session)
}

// returns a function so that we can use the normal mux functionality and pass in a shared mongo session
func (my *myHandler) wrap(target func(http.ResponseWriter, *http.Request, *mgo.Session), mongoSession *mgo.Session) func(http.ResponseWriter, *http.Request) {
	return func(resp http.ResponseWriter, req *http.Request) {
		target(resp, req, mongoSession)
	}
}

// implements ServeHTTP so this struct can act as a http server
func (my *myHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if h, ok := my.mux[r.URL.String()]; ok {
		// handle paths that are found
		h(w, r)
		return
	} else {
		// handle unhandled paths
		io.WriteString(w, "My server: "+r.URL.String())
	}
}

Let’s split this up and look at the various parts. The first thing we do is define a constructor:

func NewHandler() *myHandler {
	h := new(myHandler)
	h.defineMappings()

	return h
}

When we call this constructor it instantiates a myHandler type and calls the defineMappings() function. After that it returns the instance we created. This is what the type we instantiate looks like:

type myHandler struct {
	// holds the mapping
	mux map[string]func(http.ResponseWriter, *http.Request)
}

As you can see, we define a struct with a mux variable, a map. This map will contain the mapping between a request path and a function that can handle the request. In the constructor we also called the defineMappings function. This function, which is defined on our myHandler struct, looks like this:

func (my *myHandler) defineMappings() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		panic(err)
	}

	// make the mux
	my.mux = make(map[string]func(http.ResponseWriter, *http.Request))

	// matching of request path
	my.mux["/hello"] = requestHandler1
	my.mux["/get"] = my.wrap(requestHandler2, session)
}

In this (badly named) function we define the mapping between incoming requests and a specific function that handles the request. In this function we also create a session to mongoDB using the mgo.Dial function. As you can see, we define the request handlers in two different ways. The handler for “/hello” points directly to a function, while the handler for the “/get” path points to a wrap function, which is also defined on the myHandler struct:

func (my *myHandler) wrap(target func(http.ResponseWriter, *http.Request, *mgo.Session), mongoSession *mgo.Session) func(http.ResponseWriter, *http.Request) {
	return func(resp http.ResponseWriter, req *http.Request) {
		target(resp, req, mongoSession)
	}
}

This is a function which returns a function. The reason we do this is that we also want to pass our mongo session into the request handler. So we create a custom wrapper function, which has the correct signature, and just pass each call on to a function where we also provide the mongo session. (Note that we could also have changed the ServeHTTP implementation explained below; a sketch of that alternative follows.)
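For illustration, here is a minimal sketch of that alternative. None of this is in the original Gist; the sessionAwareHandler type and the altHandler struct are names made up for this example. The idea is that the mux stores handlers that take the mongo session as a third argument, and ServeHTTP supplies a shared session itself, so no wrap function is needed:

// sketch only; assumes the same imports as the rest of the code:
// "io", "net/http" and gopkg.in/mgo.v2 (as mgo)

type sessionAwareHandler func(http.ResponseWriter, *http.Request, *mgo.Session)

type altHandler struct {
	session *mgo.Session // dialled once, e.g. in the constructor
	mux     map[string]sessionAwareHandler
}

func (a *altHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if h, ok := a.mux[r.URL.String()]; ok {
		// ServeHTTP injects the shared session into every matched handler
		h(w, r, a.session)
		return
	}
	// handle unhandled paths
	io.WriteString(w, "My server: "+r.URL.String())
}

Both approaches do the same job; the wrap() version has the advantage of keeping the standard handler signature in the mux.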
Finally, we define the ServeHTTP function on our struct. This function is called whenever we receive a request:

func (my *myHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if h, ok := my.mux[r.URL.String()]; ok {
		// handle paths that are found
		h(w, r)
		return
	} else {
		// handle unhandled paths
		io.WriteString(w, "My server: "+r.URL.String())
	}
}

In this function we check whether we have a match in our mux variable. If we do, we call the configured handler function. If not, we just respond with a simple string.

The request handler functions

Let’s start with the function which handles the “/hello” path:

func requestHandler1(w http.ResponseWriter, r *http.Request) {
	io.WriteString(w, "Hello world!")
}

Couldn’t be easier. We simply write a specific string as the HTTP response. The “/get” path is more interesting:

func requestHandler2(w http.ResponseWriter, r *http.Request, mongoSession *mgo.Session) {
	c1 := make(chan string)
	c2 := make(chan string)
	c3 := make(chan string)

	go query("AAPL", mongoSession, c1)
	go query("GOOG", mongoSession, c2)
	go query("MSFT", mongoSession, c3)

	// the first result to arrive is written as the response
	select {
	case data := <-c1:
		io.WriteString(w, data)
	case data := <-c2:
		io.WriteString(w, data)
	case data := <-c3:
		io.WriteString(w, data)
	}
}

// runs a query against mongodb
func query(ticker string, mongoSession *mgo.Session, c chan string) {
	sessionCopy := mongoSession.Copy()
	defer sessionCopy.Close()
	collection := sessionCopy.DB("akka").C("stocks")

	var result bson.M
	collection.Find(bson.M{"Ticker": ticker}).One(&result)

	asString, _ := json.MarshalIndent(result, "", " ")

	// a random delay, so a different ticker can win the select each time
	amt := time.Duration(rand.Intn(120))
	time.Sleep(time.Millisecond * amt)
	c <- string(asString)
}

What we do here is make use of go’s channel functionality to run three queries at the same time. We fetch the ticker information for AAPL, GOOG and MSFT, each goroutine returning its result on its own channel. As soon as we receive a response on one of the channels we write that response back. So each time we call this service we get the results for either AAPL, GOOG or MSFT. With that we conclude this first step into go-lang :)

Reference: Using Go to build a REST service on top of mongoDB from our JCG partner Jos Dirksen at the Smart Java blog....
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.