Independent Contracting: How to Get There

The concept of self-employment is appealing to many technologists, but the path to getting there isn’t always clear. Independent contractors may cite several attributes of their work that they find preferable to traditional employer/employee relationships. The allure of money is obvious, but independents may also be drawn to project variety, an emphasis on new development over maintaining existing projects, and additional control over work/life balance.

Independent contracting has historically been a trend primarily among the more senior and advanced in the profession, but today it’s not uncommon to hear of intermediate or even junior level developers pursuing independent work. Often the decision to become an independent isn’t premeditated so much as it is circumstantial. A lucrative contract offer is hard to refuse, and once a new level of compensation and freedom is reached it is difficult to accept a return to lower salaries. These ‘accidental contractors’ sometimes find themselves wholly unprepared for the skills and knowledge required to build and operate a sustainable business. Many seem to think that technical strength is the difference. Technical skills are clearly part of the equation, and in unique situations superior skills can trump everything else, but many strong developers have failed as independents. Those who are thinking about exploring the independent contractor lifestyle should start considering the topics below well before signing any contracts.

Communication skills – Independents need the ability to acquire clients, either through direct interaction with the client or through a broker/recruiter. Once a client is won, keeping their business will depend upon clear communication regarding expectations, schedules, delivery, etc. Using brokers can help those with communication or social skills issues.

Varied skill set / independent learning / research – A skills inventory with some variety (languages, tools, frameworks) is fairly common among successful contractors, although advertising an unrealistic variety will hurt credibility. Independents have more incentive to invest off-hours in learning new skills and keeping an eye on trends in the industry.

Comfort with the interview process – For those in the salaried employment world, one great interview can result in years of work. Depending on contract durations, independents can find themselves in some form of interview several times a year. Anyone hoping to be successful in contracting must overcome discomfort or anxiety in interview settings.

Relationships – Successful independents usually know (and are known to) a number of people spread across various employers. Senior level contractors may have developed hundreds of relationships over time without any targeted networking efforts, while younger entrants will likely need to reach out to strangers. A lack of professional contacts is a barrier to entry for junior technologists and will negatively impact sustainability for senior contractors.

Basic sales – Advanced sales skills are unnecessary, but an understanding of what a close is and learning a few different ways to close will be helpful.

Basic marketing, brand management – Contractors have a brand, though many don’t think of it that way. Independents should pay attention to making their brand more attractive. Speaking engagements, tech community leadership, and publication/blogging are a few ways to increase visibility and potentially become a recognized authority.

Focus on billing – Independents become frustrated when they realize that running their small business is more than just writing code. Taxes, insurance, contract review, time sheets, invoices, and expense reporting eat into time that would be better spent as billable hours.
Successful contractors try to maximize billable time and often outsource (or automate) tasks when possible.

Negotiation – When using a broker it is customary that the broker handles negotiation with the hiring entity, but an independent will still need to negotiate rates with the broker. A sum as small as a few dollars per hour quickly adds up over long client engagements. Negotiations for salaried positions can be easier due to the number of components that a company may be willing to adjust (salary, bonus, stock, etc.), but for contractors the negotiation is almost always a single figure.

Reference: Independent Contracting: How to Get There from our JCG partner Dave Fecak at the Job Tips For Geeks blog.
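That last point is easy to underestimate, so here is the back-of-the-envelope arithmetic (the rate and engagement figures below are invented for illustration):

```python
def engagement_delta(rate_delta_per_hour, hours_per_week=40, weeks=50):
    """Extra income over an engagement from a small hourly rate increase."""
    return rate_delta_per_hour * hours_per_week * weeks

# Negotiating just $3/hour more on a year-long, full-time contract:
print(engagement_delta(3))  # 3 * 40 * 50 = 6000 dollars

# Even on a six-month engagement, $5/hour more is significant:
print(engagement_delta(5, weeks=26))
```

A few minutes of negotiation with the broker can be worth thousands of dollars over the life of the contract.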

Tiny, Tiny Steps – Experience Report Developing A Feature In Minimal Value-Adding Increments

A post for those who want to see what an iterative, MVP-driven development of a feature looks like.

Once upon a time, there was a webshop portal with hundreds of partner webshops displayed on the front page. Potential users wanted to find out if their favorite webshops or a particular type of goods were available; existing users wanted to find a shop quickly. Therefore it was decided to implement search. But how to do that?

Alternative 1: The waterfall search

We immediately switched on our engineering brains and started looking at Solr/Lucene and the APIs of the webshops, so that we could import their goods catalogues and our users could find everything from one place. Luckily enough, we hadn’t gone far along this path. It would certainly have taken us weeks, while the users would still have been stuck with the same unusable, unsearchable grid of webshops with only a few broad categories to help them.

Alternative 2: A minimal viable feature growing iteratively

After the original diversion, I focused on getting value to the users ASAP. Thus the development went as follows:

1. Purely client-side search (or rather filtering) of webshop names: case-insensitive matching of a substring.
2. Addition of a library that can do fuzzy matching, so that even misspelled names are found.
3. Addition of keyword search – for each webshop we had tens of keywords such as “shoes” or “sport”, so I transferred those to the browser as well and started to search them too. (The display of the results evolved accordingly. I also introduced lazy loading of these data so as not to impact the initial load time.)
4. Moving the search to the server side, to save clients – especially on mobile devices – from having to fetch so much data. This also opened possibilities for searching other sources later on.

That is where we stopped for the time being.
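The first three iterations of matching can be sketched in a few lines (a Python sketch for illustration, not the article’s ClojureScript; the shop names and keywords below are invented):

```python
from difflib import SequenceMatcher

def substring_match(query, name):
    # Iteration 1: case-insensitive substring match on the shop name
    return query.lower() in name.lower()

def fuzzy_match(query, name, threshold=0.6):
    # Iteration 2: tolerate misspellings via a similarity ratio
    return SequenceMatcher(None, query.lower(), name.lower()).ratio() >= threshold

def search(query, shops):
    # Iteration 3: also search each shop's keywords
    return [shop for shop, keywords in shops.items()
            if substring_match(query, shop)
            or fuzzy_match(query, shop)
            or any(substring_match(query, kw) for kw in keywords)]

shops = {"SportMaster": ["sport", "shoes"], "BookNook": ["books"]}
print(search("shoes", shops))       # found via a keyword
print(search("sportmastr", shops))  # misspelled name, found via fuzzy match
```

Each step is a small, shippable increment over the previous one, which is exactly what made the iterations deliverable every few days.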
(Thanks to using ClojureScript on the frontend and Clojure on the backend, this was mostly copy & paste.)

The good thing was that every few days we could have delivered increased value to the users. And I was also able to test the search on a friend (thank you, Fredrik!) early in the process and improve the UI considerably. (A weak search with a good UI/UX may well beat a great search with a terrible one, it turns out.)

What could have been improved

It wasn’t perfect though:

- We actually did not deploy the individual iterations to the users, for other reasons (but we could have!)
- We had no monitoring in place and thus couldn’t know whether users used the search (and thus whether it was worthwhile to develop it further), how they used it, or how they failed to use it. I’d have loved to see the impact on our conversion rates and active user base.

Conclusion

I love iterative development: doing the minimal thing possible to push value to users and get real feedback on it ASAP. In this case it turned out that a far simpler solution than originally envisioned, developed in a few days, was sufficient. Win-win.

Reference: Tiny, Tiny Steps – Experience Report Developing A Feature In Minimal Value-Adding Increments from our JCG partner Jakub Holy at The Holy Java blog.

Akka Notes – Actor Supervision – 8

Failures are more like a feature in distributed systems. And with Akka’s let-it-crash fault tolerance model, you can achieve a clean separation between your business logic and your failure handling logic (supervision logic) – all with very little effort. It’s pretty amazing. That is the topic of our discussion now.

Actor Supervision

Imagine a method call stack where the topmost method in the stack throws an Exception. What could be done by the methods down the stack?

- The exception could be caught and handled in order to recover.
- The exception could be caught, perhaps logged, and kept quiet.
- The methods down the stack could also choose to duck the exception completely (or catch it and rethrow it).

Now imagine that none of the methods, all the way down to the main method, handles the exception. In that case, the program exits after writing an essay of a stack trace to the console.

You could compare the same scenario with spawning Threads. If a child thread throws an exception and the run or call method doesn’t handle it, then the exception is expected to be handled by the parent thread or the main thread, as the case may be. If the main thread doesn’t handle it, the system exits.

Let’s do it one more time – if a child Actor created using context.actorOf fails with an Exception, the parent actor (aka supervisor) can choose to handle the failure. If it does, it can handle it and recover (Restart/Resume). Else, it can duck the exception (Escalate) to its own parent. Alternatively, it can just Stop the child actor – and that’s the end of the story for that child. Why did I say parent (aka supervisor)? Simply because Akka’s approach to supervision is parental supervision – only the creators of Actors can supervise them. That’s it! We have pretty much covered all the supervision Directives (what could be done about the failures).
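Outside of Akka, the core idea – a supervisor that keeps failure handling separate from business logic by mapping each failure to a directive – can be sketched in plain code (a conceptual Python sketch, not Akka’s API; it reuses the exception names from the Akka examples below purely for illustration):

```python
class MinorRecoverableException(Exception): pass
class MajorUnRecoverableException(Exception): pass

def decide(exc):
    # The supervisor's decision: map a failure to a directive
    if isinstance(exc, MinorRecoverableException):
        return "Restart"   # recreate the child and keep going
    if isinstance(exc, MajorUnRecoverableException):
        return "Stop"      # give up on this child for good
    return "Escalate"      # duck it: let our own parent decide

def supervise(child_task):
    # Business logic runs in the child; failure handling lives here
    try:
        child_task()
        return "Ok"
    except Exception as exc:
        return decide(exc)

def flaky_child():
    raise MinorRecoverableException("transient glitch")

print(supervise(flaky_child))  # Restart
```

Akka bakes this pattern into the actor hierarchy itself, so the child code never needs a try/catch of its own.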
Strategies

Ah, I forgot to mention this one: you already know that an Akka Actor can create children, and that they can create as many children as they want. Now, consider two scenarios:

1. OneForOneStrategy

Your Actor spawns multiple child actors and each of these child actors connects to a different datasource. Say you are running an app which translates an English word into multiple languages. Suppose one child actor fails and you are fine with skipping that result in the final list – what would you want to do? Shut down the service? Nope, you might want to restart/stop only that child actor. That’s called OneForOneStrategy in Akka supervision strategy terms – if one actor goes down, handle that one alone. Depending on your business exceptions, you may want to react differently (Stop, Restart, Escalate, Resume) to different exceptions. To configure your own strategy, you just override the supervisorStrategy in your Actor class. An example declaration of a OneForOneStrategy would be:

import akka.actor.Actor
import akka.actor.ActorLogging
import akka.actor.OneForOneStrategy
import akka.actor.SupervisorStrategy.Restart
import akka.actor.SupervisorStrategy.Stop

class TeacherActorOneForOne extends Actor with ActorLogging {
  ...
  override val supervisorStrategy = OneForOneStrategy() {
    case _: MinorRecoverableException => Restart
    case _: Exception => Stop
  }
  ...
}

2. AllForOneStrategy

Assume that you are doing an External Sort (one more example to prove that my creativity sucks!!), and each of your chunks is handled by a different Actor. Suddenly, one Actor fails by throwing an exception. It doesn’t make any sense to continue processing the rest of the chunks because the final result wouldn’t be correct. So, it is logical to Stop ~ALL~ the actors.

Why did I say Stop instead of Restart in the previous line? Because Restarting would not make sense for this use-case either, considering that the mailbox of each of these Actors is not cleared on Restart. So, if we restarted, the rest of the chunks would still be processed. That’s not what we want. Recreating the Actors with shiny new mailboxes is the right approach here. Again, just like with the OneForOneStrategy, you just override the supervisorStrategy with an implementation of AllForOneStrategy. An example would be:

import akka.actor.{Actor, ActorLogging}
import akka.actor.AllForOneStrategy
import akka.actor.SupervisorStrategy.Escalate
import akka.actor.SupervisorStrategy.Stop

class TeacherActorAllForOne extends Actor with ActorLogging {
  ...
  override val supervisorStrategy = AllForOneStrategy() {
    case _: MajorUnRecoverableException => Stop
    case _: Exception => Escalate
  }
  ...
}

Directives

The constructors of both AllForOneStrategy and OneForOneStrategy accept a PartialFunction[Throwable, Directive] called a Decider, which maps a Throwable to a Directive, as you can see here:

case _: MajorUnRecoverableException => Stop

There are just four kinds of directives – Stop, Resume, Restart and Escalate.

Stop – The child actor is stopped in case of an exception, and any messages to the stopped actor would obviously go to the deadLetters queue.

Resume – The child actor just ignores the message that threw the exception and proceeds with processing the rest of the messages in its queue.

Restart – The child actor is stopped and a brand new actor is initialized. Processing of the rest of the messages in the mailbox continues. The rest of the world is unaware that this happened, since the same ActorRef is attached to the new Actor.

Escalate – The supervisor ducks the failure and lets its own supervisor handle the exception.

Default Strategy

What if our Actor doesn’t specify any strategy but has created child Actors? How are they handled?
There is a default strategy declared in the Actor trait which (condensed) looks like this:

override val supervisorStrategy = OneForOneStrategy() {
  case _: ActorInitializationException => Stop
  case _: ActorKilledException => Stop
  case _: DeathPactException => Stop
  case _: Exception => Restart
}

So, in essence, the default strategy handles four cases:

1. ActorInitializationException => Stop

When the Actor cannot be initialized, it throws an ActorInitializationException, and the Actor is then stopped. Let’s simulate it by throwing an exception in the preStart callback:

package me.rerun.akkanotes.supervision

import akka.actor.{ActorSystem, Props}
import akka.actor.Actor
import akka.actor.ActorLogging

object ActorInitializationExceptionApp extends App {
  val actorSystem = ActorSystem("ActorInitializationException")
  val actor = actorSystem.actorOf(Props[ActorInitializationExceptionActor], "initializationExceptionActor")
  actor ! "someMessageThatWillGoToDeadLetter"
}

class ActorInitializationExceptionActor extends Actor with ActorLogging {
  override def preStart = {
    throw new Exception("Some random exception")
  }
  def receive = {
    case _ =>
  }
}

Running the ActorInitializationExceptionApp generates an ActorInitializationException (duh!!) and then moves all the messages into the message queue of the deadLetters Actor.

Log

[ERROR] [11/10/2014 16:08:46.569] [ActorInitializationException-akka.actor.default-dispatcher-2] [akka://ActorInitializationException/user/initializationExceptionActor] Some random exception
akka.actor.ActorInitializationException: exception during creation
    at akka.actor.ActorInitializationException$.apply(Actor.scala:164)
    ...
Caused by: java.lang.Exception: Some random exception
    at me.rerun.akkanotes.supervision.ActorInitializationExceptionActor.preStart(ActorInitializationExceptionApp.scala:17)
    ...
[INFO] [11/10/2014 16:08:46.581] [ActorInitializationException-akka.actor.default-dispatcher-4] [akka://ActorInitializationException/user/initializationExceptionActor] Message from Actor[akka://ActorInitializationException/deadLetters] to Actor[akka://ActorInitializationException/user/initializationExceptionActor#-1290470495] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

2. ActorKilledException => Stop

When an Actor is killed using the Kill message, it throws an ActorKilledException, and the default strategy stops the child Actor that threw it. At first, it seems there’s no point in stopping an already killed Actor. However, consider this: the ActorKilledException on its own is just propagated to the supervisor. What about the lifecycle watchers (deathwatchers) of this Actor that we saw during DeathWatch? The watchers won’t know anything until the Actor is Stopped. Sending a Kill to an Actor affects just that particular actor, which only the supervisor knows about. Handling it with Stop, on the other hand, suspends the mailbox of that Actor, suspends the mailboxes of its child actors, stops the child actors, sends a Terminated to all the child actors’ watchers, sends a Terminated to all the failed Actor’s own watchers, and finally stops the Actor itself.
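That stop cascade can be sketched as a recursive walk over the actor tree (a toy Python sketch of the ordering only, not Akka internals; the dict-based “actors” and watcher names are invented):

```python
def stop(actor, notify):
    # Roughly what Stop does, in order:
    actor["suspended"] = True           # 1. suspend this actor's mailbox
    for child in actor["children"]:     # 2. stop the children first (recursively)
        stop(child, notify)
    for watcher in actor["watchers"]:   # 3. send Terminated to this actor's watchers
        notify(watcher, actor["name"])
    actor["stopped"] = True             # 4. finally stop the actor itself

def make_actor(name, children=(), watchers=()):
    return {"name": name, "children": list(children),
            "watchers": list(watchers), "suspended": False, "stopped": False}

events = []
child = make_actor("child", watchers=["parent"])
parent = make_actor("parent", children=[child], watchers=["guardian"])
stop(parent, lambda watcher, name: events.append((watcher, f"Terminated({name})")))

# The child's watchers hear about its termination before the parent's do
print(events)
```

The recursion is what guarantees that a watcher of any child is notified before the parent itself finishes stopping.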
(Wow, that’s pretty awesome !!)package me.rerun.akkanotes.supervisionimport akka.actor.{ActorSystem, Props} import me.rerun.akkanotes.protocols.TeacherProtocol.QuoteRequest import akka.actor.Actor import akka.actor.ActorLogging import akka.actor.Killobject ActorKilledExceptionApp extends App{val actorSystem=ActorSystem("ActorKilledExceptionSystem") val actor=actorSystem.actorOf(Props[ActorKilledExceptionActor]) actor!"something" actor!Kill actor!"something else that falls into dead letter queue" }class ActorKilledExceptionActor extends Actor with ActorLogging{ def receive={ case message:String=> log.info (message) } } Log The logs just say that once the ActorKilledException comes in, the supervisor stops that actor and then the messages go into the queue of deadLetters INFO m.r.a.s.ActorKilledExceptionActor - somethingERROR akka.actor.OneForOneStrategy - Kill akka.actor.ActorKilledException: KillINFO akka.actor.RepointableActorRef - Message from Actor[akka://ActorKilledExceptionSystem/deadLetters] to Actor[akka://ActorKilledExceptionSystem/user/$a#-1569063462] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'. 3. DeathPactException => Stop From DeathWatch, you know that when an Actor watches over a child Actor, it is expected to handle the Terminated message in its receive. What if it doesn’t? You get the DeathPactException   The code shows that the Supervisor watches the child actor after creation but doesn’t handle the Terminated message from the child. 
package me.rerun.akkanotes.supervisionimport akka.actor.{ActorSystem, Props} import me.rerun.akkanotes.protocols.TeacherProtocol.QuoteRequest import akka.actor.Actor import akka.actor.ActorLogging import akka.actor.Kill import akka.actor.PoisonPill import akka.actor.Terminatedobject DeathPactExceptionApp extends App{val actorSystem=ActorSystem("DeathPactExceptionSystem") val actor=actorSystem.actorOf(Props[DeathPactExceptionParentActor]) actor!"create_child" //Throws DeathPactException Thread.sleep(2000) //Wait until Stopped actor!"someMessage" //Message goes to DeadLetters}class DeathPactExceptionParentActor extends Actor with ActorLogging{def receive={ case "create_child"=> { log.info ("creating child") val child=context.actorOf(Props[DeathPactExceptionChildActor]) context.watch(child) //Watches but doesnt handle terminated message. Throwing DeathPactException here. child!"stop" } case "someMessage" => log.info ("some message") //Doesnt handle terminated message //case Terminated(_) => } }class DeathPactExceptionChildActor extends Actor with ActorLogging{ def receive={ case "stop"=> { log.info ("Actor going to stop and announce that it's terminated") self!PoisonPill } } } Log The logs tell us that the DeathPactException comes in, the supervisor stops that actor and then the messages go into the queue of deadLetters INFO m.r.a.s.DeathPactExceptionParentActor - creating childINFO m.r.a.s.DeathPactExceptionChildActor - Actor going to stop and announce that it's terminatedERROR akka.actor.OneForOneStrategy - Monitored actor [Actor[akka://DeathPactExceptionSystem/user/$a/$a#-695506341]] terminated akka.actor.DeathPactException: Monitored actor [Actor[akka://DeathPactExceptionSystem/user/$a/$a#-695506341]] terminatedINFO akka.actor.RepointableActorRef - Message from Actor[akka://DeathPactExceptionSystem/deadLetters] to Actor[akka://DeathPactExceptionSystem/user/$a#-1452955980] was not delivered. [1] dead letters encountered. 
This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

4. Exception => Restart

For all other Exceptions, the default directive is to Restart the Actor. Check out the following app. Just to prove that the Actor is restarted, the OtherExceptionParentActor makes the child throw an exception and immediately sends another message. The message reaches the mailbox, and when the child actor restarts, the message gets processed. Nice!!

package me.rerun.akkanotes.supervision

import akka.actor.Actor
import akka.actor.ActorLogging
import akka.actor.ActorSystem
import akka.actor.Props

object OtherExceptionApp extends App {
  val actorSystem = ActorSystem("OtherExceptionSystem")
  val actor = actorSystem.actorOf(Props[OtherExceptionParentActor])
  actor ! "create_child"
}

class OtherExceptionParentActor extends Actor with ActorLogging {
  def receive = {
    case "create_child" => {
      log.info("creating child")
      val child = context.actorOf(Props[OtherExceptionChildActor])
      child ! "throwSomeException"
      child ! "someMessage"
    }
  }
}

class OtherExceptionChildActor extends Actor with ActorLogging {
  override def preStart = {
    log.info("Starting Child Actor")
  }
  def receive = {
    case "throwSomeException" => {
      throw new Exception("I'm getting thrown for no reason")
    }
    case "someMessage" => log.info("Restarted and printing some Message")
  }
  override def postStop = {
    log.info("Stopping Child Actor")
  }
}

Log

The logs of this program are pretty neat:

1. The exception gets thrown. We see the trace.
2. The child restarts – postStop and preStart get called (we’ll see the preRestart and postRestart callbacks soon).
3. The message that was sent to the child actor before the restart is processed.

INFO m.r.a.s.OtherExceptionParentActor - creating child
INFO m.r.a.s.OtherExceptionChildActor - Starting Child Actor
ERROR akka.actor.OneForOneStrategy - I'm getting thrown for no reason
java.lang.Exception: I'm getting thrown for no reason
    at me.rerun.akkanotes.supervision.OtherExceptionChildActor$$anonfun$receive$2.applyOrElse(OtherExceptionApp.scala:39) ~[classes/:na]
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465) ~[akka-actor_2.11-2.3.4.jar:na]
    ...
INFO m.r.a.s.OtherExceptionChildActor - Stopping Child Actor
INFO m.r.a.s.OtherExceptionChildActor - Starting Child Actor
INFO m.r.a.s.OtherExceptionChildActor - Restarted and printing some Message

Escalate and Resume

We saw examples of Stop and Restart via the default strategy. Now let’s have a quick look at Escalate.

Resume just ignores the exception and proceeds to process the next message in the mailbox. It’s like catching the exception and doing nothing about it. Awesome stuff, but not a lot to talk about there.

Escalating generally means that the exception is something critical and the immediate supervisor is not able to handle it. So, it asks for help from its own supervisor. Let’s take an example. Consider three Actors – EscalateExceptionTopLevelActor, EscalateExceptionParentActor and EscalateExceptionChildActor. If the child actor throws an exception and the parent level actor cannot handle it, it can Escalate it to the top level Actor. The top level actor can choose to react with any of the Directives. In our example, we just Stop. The Stop will stop its immediate child (which is the EscalateExceptionParentActor).
As you know, when a Stop is executed on an Actor, all its children are stopped before the Actor itself is stopped.

package me.rerun.akkanotes.supervision

import akka.actor.Actor
import akka.actor.ActorLogging
import akka.actor.ActorSystem
import akka.actor.OneForOneStrategy
import akka.actor.Props
import akka.actor.SupervisorStrategy.Escalate
import akka.actor.SupervisorStrategy.Stop

object EscalateExceptionApp extends App {
  val actorSystem = ActorSystem("EscalateExceptionSystem")
  val actor = actorSystem.actorOf(Props[EscalateExceptionTopLevelActor], "topLevelActor")
  actor ! "create_parent"
}

class EscalateExceptionTopLevelActor extends Actor with ActorLogging {
  override val supervisorStrategy = OneForOneStrategy() {
    case _: Exception => {
      log.info("The exception from the Child is now handled by the Top level Actor. Stopping Parent Actor and its children.")
      Stop //Stop will stop the Actor that threw this Exception and all its children
    }
  }
  def receive = {
    case "create_parent" => {
      log.info("creating parent")
      val parent = context.actorOf(Props[EscalateExceptionParentActor], "parentActor")
      parent ! "create_child" //Sending message to next level
    }
  }
}

class EscalateExceptionParentActor extends Actor with ActorLogging {
  override def preStart = {
    log.info("Parent Actor started")
  }
  override val supervisorStrategy = OneForOneStrategy() {
    case _: Exception => {
      log.info("The exception is ducked by the Parent Actor. Escalating to TopLevel Actor")
      Escalate
    }
  }
  def receive = {
    case "create_child" => {
      log.info("creating child")
      val child = context.actorOf(Props[EscalateExceptionChildActor], "childActor")
      child ! "throwSomeException"
    }
  }
  override def postStop = {
    log.info("Stopping parent Actor")
  }
}

class EscalateExceptionChildActor extends Actor with ActorLogging {
  override def preStart = {
    log.info("Child Actor started")
  }
  def receive = {
    case "throwSomeException" => {
      throw new Exception("I'm getting thrown for no reason.")
    }
  }
  override def postStop = {
    log.info("Stopping child Actor")
  }
}

Log

As you can see from the logs:

1. The child actor throws the exception.
2. The immediate supervisor (EscalateExceptionParentActor) escalates the exception to its supervisor (EscalateExceptionTopLevelActor).
3. The resulting directive from the EscalateExceptionTopLevelActor is to Stop the Actor.
4. In sequence, the child actor gets stopped first, and the parent actor gets stopped next (only after the watchers have been notified).

INFO m.r.a.s.EscalateExceptionTopLevelActor - creating parent
INFO m.r.a.s.EscalateExceptionParentActor - Parent Actor started
INFO m.r.a.s.EscalateExceptionParentActor - creating child
INFO m.r.a.s.EscalateExceptionChildActor - Child Actor started
INFO m.r.a.s.EscalateExceptionParentActor - The exception is ducked by the Parent Actor. Escalating to TopLevel Actor
INFO m.r.a.s.EscalateExceptionTopLevelActor - The exception from the Child is now handled by the Top level Actor. Stopping Parent Actor and its children.
ERROR akka.actor.OneForOneStrategy - I'm getting thrown for no reason.
java.lang.Exception: I'm getting thrown for no reason.
    at me.rerun.akkanotes.supervision.EscalateExceptionChildActor$$anonfun$receive$3.applyOrElse(EscalateExceptionApp.scala:71) ~[classes/:na]
    ...
INFO m.r.a.s.EscalateExceptionChildActor - Stopping child Actor
INFO m.r.a.s.EscalateExceptionParentActor - Stopping parent Actor

Please note that whatever directive is issued applies only to the immediate child that escalated. Say a Restart is issued at the top level: only the Parent would be restarted, and anything in its constructor/preStart would be executed.
If the children of the Parent actor were created in its constructor, they would also be recreated. However, children that were created through messages to the Parent Actor would remain in the Terminated state.

TRIVIA

You can actually control whether preStart gets called at all. We’ll see about this in the next minor write-up. If you are curious, just have a look at the postRestart method of the Actor:

def postRestart(reason: Throwable): Unit = {
  preStart()
}

Code

As always, the code is on github. (My .gitignore wasn’t set up right for this project. Will fix it today, sorry.)

Reference: Akka Notes – Actor Supervision – 8 from our JCG partner Arun Manivannan at the Rerun.me blog.

Spring Rest API with Swagger – Exposing documentation

Once you create API documentation, it is important to make it available to the stakeholders. In the ideal case, this published documentation would be flexible enough to account for any last-minute changes and also easy to distribute (in terms of costs as well as the time needed to accomplish this). To make this possible, we will make use of what was accomplished in my previous post detailing the process of creating API documentation. Using the Swagger UI module in combination with the API documentation published in json allows us to create simple HTML documentation that can also be used to interact with the APIs.

Integration with Swagger UI

The makers of Swagger UI describe it as a dependency-free collection of HTML, Javascript and CSS assets that dynamically generate beautiful documentation and a sandbox from a Swagger-compliant API. Because Swagger UI has no dependencies, you can host it in any server environment, or on your local machine. That being said, let’s take a look at how we can feed our Swagger documentation to Swagger UI. Being a static collection of HTML, CSS and JS, it can just be dropped into our project with no need to modify our pom.xml or any other file within the project. Just head over to GitHub and download the latest files. When you are done, the only thing needed is to supply a link to your API listing: open index.html, replace the default API listing URL with your own, and you are done. The URL from my example looks like this: http://[hostname]:[port]/SpringWithSwagger/rest/api-docs. After saving this change and deploying both your application and the static Swagger UI, you should be able to browse and interact with your APIs.

API documentation

Based on my example, I am able to access my documentation at the following URL: http://localhost:8080/SpringWithSwagger/apidocs/ (due to the nature of the deployment approach I have chosen). As you can see, Swagger UI just consumes data published in json format (discussed previously).
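Since Swagger UI is only one consumer of that json, any other client can read the same listing. A minimal sketch of pulling the API paths out of a resource listing (the json below is sample data shaped like a Swagger resource listing, not a live response from my example URL):

```python
import json

# A fragment shaped like the resource listing that Swagger serves at the
# api-docs endpoint (sample data, not an actual server response)
listing = json.loads("""
{
  "apiVersion": "1.0",
  "swaggerVersion": "1.2",
  "apis": [
    {"path": "/users", "description": "Operations on users"},
    {"path": "/products", "description": "Operations on products"}
  ]
}
""")

def api_paths(listing):
    # The same data Swagger UI renders as the clickable API listing
    return [api["path"] for api in listing["apis"]]

print(api_paths(listing))
```

This is also the hook for the post-processing mentioned later: anything that can parse json can turn the listing into PDF, Word, or any other format the stakeholders need.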
The first thing you see is the API listing which allows you to browse your collection of published APIs.When you want to browse operations available you are just one click away from nice colorful listing of all operations with short description so you know where to navigate next. Colors are consistent throughout the entire listing and complement the operation well.When you found the operation you were looking for, its time to get the information you were seeking in the first place. By clicking the method name you will be presented with full method description as well as parameters and response messages. But there is more because you can play around with your APIs and test your methods. By supplying all the required parameters and hitting ‘Try it out!’ button you are able to check whether your application server is up and behaves in an expected way. If your code requires some sort of file upload (just like my update users avatar logic does) Swagger UI has handy tools to make this as easy as possible.Even though you are able to do some quick and ad-hoc tests or checks, this tool is in no way suitable for application testing. All it does is presents API documentation in a nice to read fashion with the possibility to try the method yourself if you feel the need to (in order to improve your comprehension of the documentation). I find this really nice to have there and given you need to get the feel of the operation itself and its observable behavior Swagger UI got you covered as you can see below.Where it excels I really like the way Swagger approaches documentation as well as the way Swagger UI presents it. Following are several points making Swagger pretty sweet solution for my API documentation needs:Language agnosticGreat property to have, when working in heterogeneous environment or considering introduction of new languages and tools to your projects.Annotation-based documentationAnnotations bind documentation to code creating one unit with single life cycle. 
This makes the whole process of managing, releasing and publishing much easier and allows for automation to take place.

Open for post-processing
Having a middle step in the form of JSON allows developers to append custom scripts and transformers to the process to produce documentation in various formats, like PDF or Word documents, based on stakeholders' needs.

Rich ecosystem of modules and components
If you browse the available modules and components of Swagger you might be amazed how much time was invested in this tool. There are many useful components out there, so it is highly probable that you will find some extensions to Swagger that your project might need or benefit from.

Visually beautiful UI tool
Since I am not very talented when it comes to UI, I am really happy that I don't have to bother with coming up with a way to create, format, present and deliver my documentation. All I need to do is provide the relevant information right in the source code and that's it. The framework takes care of the rest and I end up with presentable documentation in no time. Given the nature of Swagger UI, it is really easy to add a custom corporate identity to it, if that is required.

'Try it out!' option
It's always the little things that make my day. I believe it is extremely beneficial for the whole team to have this option neatly packed in the documentation (i.e. right where you need it, when you need it).

Where it comes short

I am not going to pretend that this is a silver bullet, suits-all solution. There are certainly situations where tools like this are not preferred. Given its young age, there are still some things to be added and improved. But it's important to state that this project is still being developed and gains more popularity each passing day. That being said, I want to point out some issues I found that required some digging around and additional work.
I am going to focus on three main concerns I found troubling during my first attempts.

Conditional access to certain model parameters
Based on your needs (and also the Swagger version used), you might find yourself needing to hide certain model parameters from Swagger UI and the Swagger JSON. However, this requires a little more work than I expected and involves modification of model properties. One can expect things to get better with the next major release of Swagger and related components, but until then you are forced to do this by hand. If you are interested in how to achieve this, check out my next post called Spring Rest API with Swagger – Fine-tuning exposed documentation.

File upload and related fields
Some of your API operations may require file uploads (like my avatar update method). However, getting the operation detail to look like it is presented in my example requires a little bit of manual work and specification filtering. To get rid of the unwanted parameters related to this issue, check out my next post called Spring Rest API with Swagger – Fine-tuning exposed documentation, where I will detail this issue and how to get the results presented here.

API models and XML
Swagger claims to be friends with both JSON and XML. This is certainly true on the operational level; however, when it comes to model presentation, XML comes second to JSON (due to technical complexities related to XML and its schema). Currently, all API models in Swagger UI are displayed as JSON entities (both JSON and XML ones), which forced me not to declare the response type in the ProductsEndpoint documentation (an example of an endpoint using the XML format in my SpringWithSwagger example). This is something I haven't resolved to my full satisfaction, so I intentionally chose not to declare response types while dealing with XML.

What is next?

If you followed all the steps, you should now have a working API documentation / sandbox for your APIs.
I will showcase how to fine-tune the published documentation using Swagger in my next article called Spring Rest API with Swagger – Fine-tuning exposed documentation. The code used in this micro series is published on GitHub and provides examples for all the discussed features and tools. Please enjoy!

Reference: Spring Rest API with Swagger – Exposing documentation from our JCG partner Jakub Stas at the Jakub Stas blog.

An entity modelling strategy for scaling optimistic locking

Introduction

Application-level repeatable reads are suitable for preventing lost updates in web conversations. Enabling entity-level optimistic locking is fairly easy. You just have to mark one logical-clock property (usually an integer counter) with the JPA @Version annotation and Hibernate takes care of the rest.

The catch

Optimistic locking discards all incoming changes that are relative to an older entity version. But everything has a cost and optimistic locking is no exception. The optimistic concurrency control mechanism takes an all-or-nothing approach even for non-overlapping changes. If two concurrent transactions change distinct entity property subsets, there's no risk of losing updates. Yet two concurrent updates starting from the same entity version are always going to collide. Only the first update is going to succeed; the second one fails with an optimistic locking exception. This strict policy acts as if all changes were overlapping. For highly concurrent write scenarios, this single-version check strategy can lead to a large number of rolled-back updates.

Time for testing

Let's say we have the following Product entity. This entity is updated by three users (e.g. Alice, Bob and Vlad), each one updating a distinct property subset.
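The Product entity's source is not reproduced in this excerpt, but judging from the SQL log that follows, it carries a description, likes, name, price, quantity and a version counter. As a plain-Java toy simulation of the single-version check (all class and method names here are invented for illustration; the real check is performed by Hibernate's UPDATE ... WHERE id = ? AND version = ? statement):

```java
// Toy in-memory model of Hibernate's versioned UPDATE. All names
// are illustrative, not taken from the article's code.
class ProductRow {
    String description = "Plasma TV";
    int likes = 0;
    long quantity = 7;
    int version = 0;

    // Succeeds only if the caller still holds the current version,
    // then bumps the logical clock (like UPDATE ... WHERE version = ?).
    boolean update(int expectedVersion, Runnable changes) {
        if (expectedVersion != version) {
            return false; // would surface as a StaleObjectStateException
        }
        changes.run();
        version++;
        return true;
    }
}

class SingleVersionDemo {
    public static void main(String[] args) {
        ProductRow product = new ProductRow();
        int loaded = product.version; // Alice, Bob and Vlad all read version 0

        boolean alice = product.update(loaded, () -> product.quantity = 6);
        boolean bob = product.update(loaded, () -> product.likes = 1);
        boolean vlad = product.update(loaded, () -> product.description = "Plasma HDTV");

        // Only the first writer wins, even though the changes never overlap.
        System.out.println(alice + " " + bob + " " + vlad); // prints "true false false"
    }
}
```

Alice's update bumps the version to 1, so Bob's and Vlad's updates, still holding version 0, are rejected even though the three changes touch disjoint columns.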
The following diagram depicts their actions. The SQL DML statement sequence goes like this:

#create tables
Query:{[create table product (id bigint not null, description varchar(255) not null, likes integer not null, name varchar(255) not null, price numeric(19,2) not null, quantity bigint not null, version integer not null, primary key (id))][]}
Query:{[alter table product add constraint UK_jmivyxk9rmgysrmsqw15lqr5b unique (name)][]}

#insert product
Query:{[insert into product (description, likes, name, price, quantity, version, id) values (?, ?, ?, ?, ?, ?, ?)][Plasma TV,0,TV,199.99,7,0,1]}

#Alice selects the product
Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_, optimistic0_.version as version7_0_0_ from product optimistic0_ where optimistic0_.id=?][1]}

#Bob selects the product
Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_, optimistic0_.version as version7_0_0_ from product optimistic0_ where optimistic0_.id=?][1]}

#Vlad selects the product
Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_, optimistic0_.version as version7_0_0_ from product optimistic0_ where optimistic0_.id=?][1]}

#Alice updates the product
Query:{[update product set description=?, likes=?, name=?, price=?, quantity=?, version=? where id=? and version=?][Plasma TV,0,TV,199.99,6,1,1,0]}

#Bob updates the product
Query:{[update product set description=?, likes=?, name=?, price=?, quantity=?, version=? where id=? and version=?][Plasma TV,1,TV,199.99,7,1,1,0]}
c.v.h.m.l.c.OptimisticLockingOneRootOneVersionTest - Bob: Optimistic locking failure
org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.OptimisticLockingOneRootOneVersionTest$Product#1]

#Vlad updates the product
Query:{[update product set description=?, likes=?, name=?, price=?, quantity=?, version=? where id=? and version=?][Plasma HDTV,0,TV,199.99,7,1,1,0]}
c.v.h.m.l.c.OptimisticLockingOneRootOneVersionTest - Vlad: Optimistic locking failure
org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.OptimisticLockingOneRootOneVersionTest$Product#1]

Because there's only one entity version, only the first transaction is going to succeed. The second and the third updates are discarded since they reference an older entity version.

Divide et impera

If there is more than one writing pattern, we can divide the original entity into several sub-entities. Instead of only one optimistic locking counter, we now have one distinct counter per sub-entity. In our example, the quantity can be moved to ProductStock and the likes to ProductLiking. Whenever we change the product quantity, it's only the ProductStock version that's going to be checked, so other competing quantity updates are prevented. But now, we can concurrently update both the main entity (e.g.
ProductStock and ProductLiking):

Running the previous test case yields the following output:

#create tables
Query:{[create table product (id bigint not null, description varchar(255) not null, name varchar(255) not null, price numeric(19,2) not null, version integer not null, primary key (id))][]}
Query:{[create table product_liking (likes integer not null, product_id bigint not null, primary key (product_id))][]}
Query:{[create table product_stock (quantity bigint not null, product_id bigint not null, primary key (product_id))][]}
Query:{[alter table product add constraint UK_jmivyxk9rmgysrmsqw15lqr5b unique (name)][]}
Query:{[alter table product_liking add constraint FK_4oiot8iambqw53dwcldltqkco foreign key (product_id) references product][]}
Query:{[alter table product_stock add constraint FK_hj4kvinsv4h5gi8xi09xbdl46 foreign key (product_id) references product][]}

#insert product
Query:{[insert into product (description, name, price, version, id) values (?, ?, ?, ?, ?)][Plasma TV,TV,199.99,0,1]}
Query:{[insert into product_liking (likes, product_id) values (?, ?)][0,1]}
Query:{[insert into product_stock (quantity, product_id) values (?, ?)][7,1]}

#Alice selects the product
Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.name as name3_0_0_, optimistic0_.price as price4_0_0_, optimistic0_.version as version5_0_0_ from product optimistic0_ where optimistic0_.id=?][1]}
Query:{[select optimistic0_.product_id as product_2_1_0_, optimistic0_.likes as likes1_1_0_ from product_liking optimistic0_ where optimistic0_.product_id=?][1]}
Query:{[select optimistic0_.product_id as product_2_2_0_, optimistic0_.quantity as quantity1_2_0_ from product_stock optimistic0_ where optimistic0_.product_id=?][1]}

#Bob selects the product
Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.name as name3_0_0_, optimistic0_.price as price4_0_0_, optimistic0_.version as version5_0_0_ from product optimistic0_ where optimistic0_.id=?][1]}
Query:{[select optimistic0_.product_id as product_2_1_0_, optimistic0_.likes as likes1_1_0_ from product_liking optimistic0_ where optimistic0_.product_id=?][1]}
Query:{[select optimistic0_.product_id as product_2_2_0_, optimistic0_.quantity as quantity1_2_0_ from product_stock optimistic0_ where optimistic0_.product_id=?][1]}

#Vlad selects the product
Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.name as name3_0_0_, optimistic0_.price as price4_0_0_, optimistic0_.version as version5_0_0_ from product optimistic0_ where optimistic0_.id=?][1]}
Query:{[select optimistic0_.product_id as product_2_1_0_, optimistic0_.likes as likes1_1_0_ from product_liking optimistic0_ where optimistic0_.product_id=?][1]}
Query:{[select optimistic0_.product_id as product_2_2_0_, optimistic0_.quantity as quantity1_2_0_ from product_stock optimistic0_ where optimistic0_.product_id=?][1]}

#Alice updates the product
Query:{[update product_stock set quantity=? where product_id=?][6,1]}

#Bob updates the product
Query:{[update product_liking set likes=? where product_id=?][1,1]}

#Vlad updates the product
Query:{[update product set description=?, name=?, price=?, version=? where id=? and version=?][Plasma HDTV,TV,199.99,1,1,0]}

All three concurrent transactions are successful because we no longer have only one logical-clock version but three of them, according to three distinct write responsibilities.

Conclusion

When designing the persistence domain model, you have to take into consideration both the querying and writing responsibility patterns. Breaking a larger entity into several sub-entities can help you scale updates, while reducing the chance of optimistic locking failures. If you are wary of possible performance issues (due to entity state fragmentation), you should know that Hibernate offers several optimization techniques for overcoming the scattered entity info side-effect.
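The gain from splitting the logical clock can be sketched with the same kind of plain-Java toy simulation (again with invented names, not the article's actual mapping code): with one independent counter per sub-entity, the three non-overlapping updates all succeed.

```java
// Toy model of one optimistic-lock counter per sub-entity.
// All names are illustrative, not the article's actual classes.
class VersionedValue<T> {
    T value;
    int version = 0;

    VersionedValue(T value) {
        this.value = value;
    }

    // Each sub-entity carries its own logical clock, so writers that
    // touch different sub-entities never invalidate each other.
    boolean update(int expectedVersion, T newValue) {
        if (expectedVersion != version) {
            return false;
        }
        value = newValue;
        version++;
        return true;
    }
}

class SplitVersionDemo {
    public static void main(String[] args) {
        VersionedValue<String> product = new VersionedValue<>("Plasma TV"); // Product (description, ...)
        VersionedValue<Long> stock = new VersionedValue<>(7L);              // ProductStock (quantity)
        VersionedValue<Integer> liking = new VersionedValue<>(0);           // ProductLiking (likes)

        // All three users start from version 0 of their respective rows.
        boolean alice = stock.update(0, 6L);
        boolean bob = liking.update(0, 1);
        boolean vlad = product.update(0, "Plasma HDTV");

        // No collision for non-overlapping changes.
        System.out.println(alice + " " + bob + " " + vlad); // prints "true true true"
    }
}
```

Competing writes against the same sub-entity (two concurrent quantity updates, say) would still collide, which is exactly the protection we want to keep.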
You can always join all sub-entities in a single SQL query, in case you need all entity-related data. Second-level caching is also a good solution for fetching sub-entities without hitting the database. Because we split the root entity into several entities, the cache can be better utilized. A stock update is only going to invalidate the associated ProductStock cache entry, without interfering with the Product and ProductLiking cache regions.

Code available on GitHub.

Reference: An entity modelling strategy for scaling optimistic locking from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog.

Java Annotations Tutorial – The ULTIMATE Guide (PDF Download)

EDITORIAL NOTE: Annotations in Java are a major feature and every Java developer should know how to utilize them. We have provided an abundance of tutorials here at Java Code Geeks, like Creating Your Own Java Annotations, Java Annotations Tutorial with Custom Annotation and Java Annotations: Explored & Explained. We also featured articles on annotations used in various libraries, including Make your Spring Security @Secured annotations more DRY and Java Annotations & A Real World Spring Example. Now, it is time to gather all the information around Annotations under one reference post for your reading pleasure. Enjoy!

Want to master Java Annotations? Subscribe to our newsletter and download the Java Annotations Ultimate Guide right now! In order to help you master the topic of Annotations, we have compiled a kick-ass guide with all the major features and use cases! Besides studying them online you may download the eBook in PDF format!

Table Of Contents

1. Why annotations?
2. Introduction
3. Consumers
4. Annotations syntax and annotation elements
5. Where annotations can be used
6. Use cases
7. Built in annotations
8. Java 8 and annotations
9. Custom annotations
10. Retrieving Annotations
11. Inheritance in annotations
12. Known libraries using annotations
13. Summary
14. Download
15. Resources

In this article we are going to explain what Java annotations are, how they work and what can be done using annotations in Java. We will show what annotations come with Java out of the box, also called built-in or meta annotations, and what new features related to them are available in Java 8.
Finally we will implement a custom annotation and a processor application (consumer) that makes use of annotations using reflection in Java. We will list some very well known and broadly used libraries based on annotations, like JUnit, JAXB, Spring and Hibernate. At the end of this article you can find a compressed file with all the examples shown in this tutorial. The following software versions were used in the implementation of these examples:

Eclipse Luna 4.4
JRE Update 8.20
JUnit 4
Hibernate 4.3.6
FindBugs 3.0.0

1. Why annotations?

Annotations were introduced in J2SE 5.0, and the main reason was the need to provide a mechanism that allows programmers to write metadata about their code directly in the code itself. Before annotations, the way programmers described their code was not standardized and each developer did it in their own way: using transient keywords, via comments, with interfaces, etc. This was not a good approach, so a decision was taken to add a new standard form of metadata to Java: annotations were introduced. Other circumstances contributed to that decision: at the time, XML was used as the standard configuration mechanism for different types of applications. This was not ideal because of the decoupling between code and XML (XML is not code!) and the resulting maintenance cost of these decoupled applications. Another influence was the @deprecated Javadoc tag (with a lowercase d) in use since Java 1.4; this is very likely one of the reasons for the current annotation syntax using "@". The main Java Specification Requests involved in the annotations design and development are:

JSR 175: A metadata facility for the Java programming Language
JSR 250: Common Annotations for the Java Platform

2. Introduction

The best way to explain what an annotation is is the word metadata: data that contains information about itself.
Annotations are code metadata; they contain information about the code itself. Annotations can be used on packages, classes, methods, variables and parameters. Since Java 8 annotations can be placed almost anywhere in the code; these are called type annotations and we will see them in more detail later in this tutorial. The annotated code is not directly affected by its annotations. They only provide information about it to third-party systems that may (or may not) use the annotations for different purposes. Annotations are compiled into the class files and can be retrieved at run time and used with some logical purpose by a consumer or processor. It is also possible to create annotations that are not available at run time; it is even possible to create annotations that are only available in the source and not at compile time.

3. Consumers

It can be difficult to understand what the actual purpose of annotations is and what they can be used for: they do not contain any kind of functional logic and they do not affect the code they annotate, so what are they for? The explanation is what I call the annotation consumers. These are systems or applications that make use of the annotated code and execute different actions depending on the annotation information. For example, in the case of the built-in annotations (meta annotations) that come out of the box with standard Java, the consumer is the Java Virtual Machine (JVM) executing the annotated code. There are other examples that we are going to see later in this tutorial, like JUnit, where the consumer is the JUnit processor reading and analyzing the annotated test classes and deciding, for example, depending on the annotation, in which order the unit tests are going to be executed or what methods are going to be executed before and after every test. We will see this in more detail in the JUnit related chapter. Consumers use reflection in Java in order to read and analyze the annotated source code.
The main packages used for this purpose are java.lang and java.lang.reflect. We will explain in this tutorial how to create a custom consumer from scratch using reflection.

4. Annotations syntax and annotation elements

An annotation is declared using the character '@' as a prefix of the annotation name. This tells the compiler that the element is an annotation. An example:

@Annotation
public void annotatedMethod() {
    ...
}

The annotation above is called "Annotation" and is annotating the method annotatedMethod(). The compiler will take care of it. An annotation has elements in the form of key-value pairs. These "elements" are the properties of the annotation:

@Annotation(
    info = "I am an annotation",
    counter = "55"
)
public void annotatedMethod() {
    ...
}

If the annotation contains only one element (or if only one element needs to be specified because the rest have default values), we can do something like:

@Annotation("I am an annotation")
public void annotatedMethod() {
    ...
}

As we saw in the first example, if no elements need to be specified, then the parentheses are not needed. Multiple annotations are possible for an element, in this case for a class:

@Annotation(info = "U a u O")
@Annotation2
class AnnotatedClass {
    ...
}

Some annotations come out of the box with Java; these are called built-in annotations. It is also possible to define your own annotations; these are called custom annotations. We will see both in the next chapters.

5. Where annotations can be used

Annotations can be used on basically every element of a Java program: classes, fields, methods, packages, variables, etc. Since Java 8 the concept of type annotations is available: before Java 8, annotations could only be used on the declarations of the elements listed above; since Java 8 they can also be used wherever a type is used.
Something like the following is now available:

@MyAnnotation String str = "danibuiza";

We will see this mechanism in more detail in the chapter related to Java 8 annotations.

6. Use cases

Annotations can be used for many different purposes; the most common ones are:

Information for the compiler: Annotations can be used by the compiler to produce warnings or even errors based on different rules. One example of this kind of usage is the Java 8 @FunctionalInterface annotation, which makes the compiler validate the annotated class and check if it is a correct functional interface or not.
Documentation: Annotations can be used by software applications to measure the quality of the code, as FindBugs or PMD do, or to generate reports automatically, as Jenkins, Jira or Teamcity do.
Code generation: Annotations can be used to generate code or XML files automatically using the metadata information present in the code. A good example of this is the JAXB library.
Runtime processing: Annotations examined at run time can be used for different objectives like unit testing (JUnit), dependency injection (Spring), validation, logging (Log4J), data access (Hibernate), etc.

In this tutorial we will show several possible usages of annotations and how very well known Java libraries use them.

7. Built in annotations

The Java language comes with a set of default annotations. In this chapter we are going to explain the most important ones. It is important to mention that this list refers only to the core packages of the Java language and does not include all the packages and libraries available in the standard JRE, like JAXB or the Servlet specification. Some of the following standard annotations are called meta annotations; their targets are other annotations and they contain information about them:

@Retention: This annotation annotates other annotations and is used to indicate how to store the marked annotation.
This annotation is a kind of meta annotation, since it marks an annotation and informs about its nature. Possible values are:

SOURCE: Indicates that this annotation is ignored by the compiler and the JVM (not available at run time) and is only retained in the source.
CLASS: Indicates that the annotation is going to be retained by the compiler but ignored by the JVM and, because of this, not available at run time.
RUNTIME: Means that the annotation is going to be retained by the Java Virtual Machine and can be used at run time via reflection.

We will see several examples of this annotation in this tutorial.

@Target: This one restricts the elements that an annotation can be applied to. One or more element types can be specified. Here is a list of the available types:

ANNOTATION_TYPE means that the annotation can be applied to another annotation.
CONSTRUCTOR can be applied to a constructor.
FIELD can be applied to a field or property.
LOCAL_VARIABLE can be applied to a local variable.
METHOD can be applied to a method-level annotation.
PACKAGE can be applied to a package declaration.
PARAMETER can be applied to the parameters of a method.
TYPE can be applied to a class, interface, enum or annotation declaration.

@Documented: The annotated elements are going to be documented by the Javadoc tool. By default annotations are not documented. This annotation can be applied to another annotation.

@Inherited: By default annotations are not inherited by subclasses. This annotation marks an annotation to be automatically inherited by all subclasses of the annotated class. It can be applied to class elements.

@Deprecated: Indicates that the annotated element should not be used. This annotation makes the compiler generate a warning message. It can be applied to methods, classes and fields. An explanation of why the element is deprecated, along with alternative usages, should be provided when using this annotation.

@SuppressWarnings: Tells the compiler not to produce warnings for a specific reason or reasons.
For example, if we do not want to get warnings because of an unused private method we can write something like:

@SuppressWarnings("unused")
private String myNotUsedMethod() {
    ...
}

Normally the compiler would produce a warning if this method is not used; using this annotation prevents that behavior. This annotation expects one or more parameters with the warning categories to avoid.

@Override: Tells the compiler that the element is overriding an element of the super class. This annotation is not mandatory when overriding elements, but it helps the compiler generate errors when the overriding is not done correctly, for example if the subclass method parameters are different than the super class ones, or if the return type does not match.

@SafeVarargs: Asserts that the code of the method or constructor does not perform unsafe operations on its arguments. Future versions of the Java language may make the compiler produce an error at compilation time in case of potentially unsafe operations when this annotation is used. For more information see http://docs.oracle.com/javase/7/docs/api/java/lang/SafeVarargs.html

8. Java 8 and annotations

Java 8 comes with several improvements, and the annotations framework is among them. In this chapter we are going to explain and provide examples of the 3 main annotation-related topics introduced in the eighth Java update: the @Repeatable annotation, the introduction of type annotation declarations and the functional interface annotation @FunctionalInterface (used in combination with lambdas).

@Repeatable: Indicates that an annotation annotated with this one can be applied more than once to the same element declaration. Here is an example of usage.
First of all we create a container for the annotation that is going to be repeated:

/**
 * Container for the {@link CanBeRepeated} Annotation containing a list of values
 */
@Retention( RetentionPolicy.RUNTIME )
@Target( ElementType.TYPE_USE )
public @interface RepeatedValues {
    CanBeRepeated[] value();
}

Afterwards, we create the annotation itself and we mark it with the meta annotation @Repeatable:

@Retention( RetentionPolicy.RUNTIME )
@Target( ElementType.TYPE_USE )
@Repeatable( RepeatedValues.class )
public @interface CanBeRepeated {
    String value();
}

Finally we can see how to use it (repeatedly) on a given class:

@CanBeRepeated( "the color is green" )
@CanBeRepeated( "the color is red" )
@CanBeRepeated( "the color is blue" )
public class RepeatableAnnotated {
}

If we tried to do the same with a non-repeatable annotation:

@Retention( RetentionPolicy.RUNTIME )
@Target( ElementType.TYPE_USE )
public @interface CannotBeRepeated {
    String value();
}

@CannotBeRepeated( "info" )
/*
 * if we try to repeat the annotation we will get an error: Duplicate annotation of
 * non-repeatable type @CannotBeRepeated. Only annotation types marked
 * @Repeatable can be used multiple times at one target.
 */
// @CannotBeRepeated( "more info" )
public class RepeatableAnnotatedWrong {
}

we would get an error from the compiler like:

Duplicate annotation of non-repeatable type

Since Java 8 it is also possible to use annotations within types, that is, anywhere you can use a type, including the new operator, casts, implements and throws clauses. Type annotations allow improved analysis of Java code and can ensure even stronger type checking.
The following examples clarify this point:

@SuppressWarnings( "unused" )
public static void main( String[] args )
{
    // type def
    @TypeAnnotated String cannotBeEmpty = null;

    // type
    List<@TypeAnnotated String> myList = new ArrayList<String>();

    // values
    String myString = new @TypeAnnotated String( "this is annotated in java 8" );
}

// in method params
public void methodAnnotated( @TypeAnnotated int parameter )
{
    System.out.println( "do nothing" );
}

None of this was possible until Java 8.

@FunctionalInterface: This annotation indicates that the annotated element is going to be a functional interface. A functional interface is an interface that has just one abstract method (not a default one). The compiler will handle the annotated element as a functional interface and will produce an error if the element does not comply with the requirements. Here is an example of the functional interface annotation:

// implementing its methods
@SuppressWarnings( "unused" )
MyCustomInterface myFuncInterface = new MyCustomInterface() {
    @Override
    public int doSomething( int param )
    {
        return param * 10;
    }
};

// using lambdas
@SuppressWarnings( "unused" )
MyCustomInterface myFuncInterfaceLambdas = ( x ) -> ( x * 10 );

@FunctionalInterface
interface MyCustomInterface
{
    /*
     * more abstract methods would cause the interface not to be a valid functional interface and
     * the compiler would throw an error: Invalid '@FunctionalInterface' annotation;
     * FunctionalInterfaceAnnotation.MyCustomInterface is not a functional interface
     */
    // boolean isFunctionalInterface();

    int doSomething( int param );
}

This annotation can be applied to classes, interfaces, enums and annotations, and it is retained by the JVM and available at run time. Here is its declaration:

@Documented
@Retention(value=RUNTIME)
@Target(value=TYPE)
public @interface FunctionalInterface

For more information about this annotation see http://docs.oracle.com/javase/8/docs/api/java/lang/FunctionalInterface.html.

9.
Custom annotations

As we mentioned several times in this tutorial, it is possible to define and implement custom annotations. In this chapter we are going to show how to do this. First of all, define the new annotation:

public @interface CustomAnnotationClass

This creates a new annotation type called CustomAnnotationClass. The keyword used for this purpose is @interface, which indicates the definition of a custom annotation. After this, you need to define a couple of important attributes for this annotation: the retention policy and the target. There are other attributes that can be defined here, but these are the most common and important ones. They are declared in the form of annotations of an annotation and were described in the chapter "Built in annotations", since they are annotations that come out of the box with Java. So we define these properties for our new custom annotation:

@Retention( RetentionPolicy.RUNTIME )
@Target( ElementType.TYPE )
public @interface CustomAnnotationClass

With the retention policy RUNTIME we indicate to the compiler that this annotation should be retained by the JVM and can be analyzed at run time using reflection. With the element type TYPE we indicate that this annotation can be applied to any type declaration (class, interface, enum or annotation). Afterwards we define a couple of properties for this annotation:

@Retention( RetentionPolicy.RUNTIME )
@Target( ElementType.TYPE )
public @interface CustomAnnotationClass {

    public String author() default "danibuiza";

    public String date();

}

Above we have just defined the property author, with the default value "danibuiza", and the property date, without a default value. We should mention that annotation method declarations cannot have parameters and are not allowed to have a throws clause. The return types are restricted to primitives, String, Class, enums, annotations and arrays of the types mentioned before.
Now we can use our freshly created custom annotation in the following way:

@CustomAnnotationClass( date = "2014-05-05" )
public class AnnotatedClass { ... }

In a similar way we can create an annotation to be used in method declarations, using the target METHOD:

@Retention( RetentionPolicy.RUNTIME )
@Target( ElementType.METHOD )
public @interface CustomAnnotationMethod {

    public String author() default "danibuiza";

    public String date();

    public String description();

}

This one can be used in a method declaration like:

@CustomAnnotationMethod( date = "2014-06-05", description = "annotated method" )
public String annotatedMethod() {
    return "nothing niente";
}

@CustomAnnotationMethod( author = "friend of mine", date = "2014-06-05", description = "annotated method" )
public String annotatedMethodFromAFriend() {
    return "nothing niente";
}

There are other properties that can be used with custom annotations, but target and retention policy are the most important ones.

10. Retrieving Annotations

The Java reflection API contains several methods that can be used to retrieve, at runtime, the annotations of classes, methods and other elements. The interface that contains all these methods is AnnotatedElement, and the most important ones are:

getAnnotations(): Returns all annotations for the given element, including the ones that are not explicitly declared in the element definition.
isAnnotationPresent(annotation): Checks whether the passed annotation is present on the current element.
getAnnotation(class): Retrieves a specific annotation passed as parameter. Returns null if this annotation is not present on the given element.

This interface is implemented by java.lang.Class, java.lang.reflect.Method and java.lang.reflect.Field among others, so it can be used with basically any kind of Java element.
Now we are going to see an example of how to read the annotations present in a class or method using the methods listed above. We write a program that reads all the annotations present in a class and its methods (using the classes defined before):

public static void main( String[] args ) throws Exception {

    Class<AnnotatedClass> object = AnnotatedClass.class;

    // Retrieve all annotations from the class
    Annotation[] annotations = object.getAnnotations();
    for( Annotation annotation : annotations ) {
        System.out.println( annotation );
    }

    // Checks if an annotation is present
    if( object.isAnnotationPresent( CustomAnnotationClass.class ) ) {
        // Gets the desired annotation
        Annotation annotation = object.getAnnotation( CustomAnnotationClass.class );
        System.out.println( annotation );
    }

    // the same for all methods of the class
    for( Method method : object.getDeclaredMethods() ) {
        if( method.isAnnotationPresent( CustomAnnotationMethod.class ) ) {
            Annotation annotation = method.getAnnotation( CustomAnnotationMethod.class );
            System.out.println( annotation );
        }
    }
}

The output of this program would be:

@com.danibuiza.javacodegeeks.customannotations.CustomAnnotationClass(getInfo=Info, author=danibuiza, date=2014-05-05)
@com.danibuiza.javacodegeeks.customannotations.CustomAnnotationClass(getInfo=Info, author=danibuiza, date=2014-05-05)
@com.danibuiza.javacodegeeks.customannotations.CustomAnnotationMethod(author=friend of mine, date=2014-06-05, description=annotated method)
@com.danibuiza.javacodegeeks.customannotations.CustomAnnotationMethod(author=danibuiza, date=2014-06-05, description=annotated method)

In the program above we can see the usage of the method getAnnotations() to retrieve all annotations of a given object (a method or a class). We also showed how to check whether a specific annotation is present and, if so, how to retrieve it, using the methods isAnnotationPresent() and getAnnotation(). 11.
Inheritance in annotations

Annotations can use inheritance in Java, but this inheritance has little in common with what we know as inheritance in object oriented programming, where a subclass inherits methods, fields and behavior from its superclass or interfaces. Marking an annotation with the reserved annotation @Inherited indicates that a class annotated with it passes the annotation on to its subclasses automatically, without the need to declare the annotation in the subclasses. By default, a class extending a superclass does not inherit its annotations. This is completely in line with the objective of annotations, which is to provide information about the code they annotate, not to modify its behavior. An example makes this clearer. First, we define a custom annotation that is inherited automatically:

@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface InheritedAnnotation {
}

We have a superclass called AnnotatedSuperClass with the @InheritedAnnotation annotation declared above:

@InheritedAnnotation
public class AnnotatedSuperClass {
    public void oneMethod() {
    }
}

And a subclass extending it:

public class AnnotatedSubClass extends AnnotatedSuperClass {
    @Override
    public void oneMethod() {
    }
}

The subclass AnnotatedSubClass shown above automatically inherits the annotation @InheritedAnnotation. We can see this using the following test with the method isAnnotationPresent(), available on every Class object:

System.out.println( "is true: " + AnnotatedSuperClass.class.isAnnotationPresent( InheritedAnnotation.class ) );
System.out.println( "is true: " + AnnotatedSubClass.class.isAnnotationPresent( InheritedAnnotation.class ) );

The output of these lines is:

is true: true
is true: true

We can see how the annotation is inherited by the subclass automatically, with no need to declare it.
If we try to use this kind of annotation on an interface:

@InheritedAnnotation
public interface AnnotatedInterface {
    public void oneMethod();
}

with an implementation of it:

public class AnnotatedImplementedClass implements AnnotatedInterface {
    @Override
    public void oneMethod() {
    }
}

and we check the result of the annotation inheritance using the isAnnotationPresent() method:

System.out.println( "is true: " + AnnotatedInterface.class.isAnnotationPresent( InheritedAnnotation.class ) );
System.out.println( "is true: " + AnnotatedImplementedClass.class.isAnnotationPresent( InheritedAnnotation.class ) );

the result of the previous program will be:

is true: true
is true: false

This shows how inheritance works for annotations on interfaces: it is simply ignored. The implementing class does not inherit the annotation even though it is marked @Inherited; @Inherited only works through superclasses, as in the case of the AnnotatedSubClass class above. Annotations on interfaces have no effect on implementing classes, and the same goes for methods, variables, packages, etc.: only class-level annotations work in conjunction with @Inherited. A very good explanation can be found in the Javadoc of the @Inherited annotation: http://docs.oracle.com/javase/7/docs/api/java/lang/annotation/Inherited.html. Annotations cannot inherit from other annotations; if you try to do this you will get a compilation error: Annotation type declaration cannot have explicit superinterfaces

12. Known libraries using annotations

In this chapter we are going to show how very well known Java libraries make use of annotations. Libraries like JAXB, the Spring Framework, FindBugs, Log4j, Hibernate and JUnit (the list could go on) use them for very different purposes, such as code quality analysis, unit testing, XML parsing, dependency injection and many others. In this tutorial we are going to show some of these use cases: 12.1.
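The two experiments above (superclass versus interface) can be condensed into one runnable sketch, using the class names from the text; the method bodies are omitted since only the type-level annotation matters here:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface InheritedAnnotation { }

@InheritedAnnotation
class AnnotatedSuperClass { }

class AnnotatedSubClass extends AnnotatedSuperClass { }

@InheritedAnnotation
interface AnnotatedInterface { }

class AnnotatedImplementedClass implements AnnotatedInterface { }

class InheritedDemo {
    public static void main(String[] args) {
        // @Inherited works through the superclass chain: both print true
        System.out.println(AnnotatedSuperClass.class.isAnnotationPresent(InheritedAnnotation.class)); // true
        System.out.println(AnnotatedSubClass.class.isAnnotationPresent(InheritedAnnotation.class));   // true
        // ...but is ignored for implemented interfaces: prints false
        System.out.println(AnnotatedImplementedClass.class.isAnnotationPresent(InheritedAnnotation.class)); // false
    }
}
```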
JUnit

This framework is used for unit testing in Java. Since version 4, annotations are widely used and are one of the pillars of JUnit's design. Basically, the JUnit processor uses reflection to read the classes and suites that may contain unit tests, and executes them depending on the annotations found on each method or class. There are JUnit annotations that modify the way a test is executed; others are used to trigger execution, prevent execution, change the order of execution, etc. The list of possible annotations is very large, but here are the most important ones:

@Test: This annotation tells JUnit that the annotated method has to be executed as a unit test. It is applicable to methods only (using the target element type METHOD) and is retained at runtime by the Java Virtual Machine (using the retention policy RUNTIME).

@Test
public void testMe() {
    // test assertions
    assertEquals(1, 1);
}

In the example above we can see how to use this kind of annotation in JUnit.

@Before: the before annotation tells JUnit that the marked method should be executed before every test. This is very useful for set up methods where the test context is initialized. It is applicable to methods only:

@Before
public void setUp() {
    // initializing variables
    count = 0;
    init();
}

@After: This annotation tells the JUnit processor that the marked method should be executed after every unit test. It is normally used in tear down methods where resources are cleaned up and reset:

@After
public void destroy() {
    // closing input stream
    stream.close();
}

@Ignore: This one tells JUnit that the marked methods should not be executed as unit tests, even if they are annotated as tests.
They should just be ignored:

@Ignore
@Test
public void donotTestMe() {
    count = -22;
    System.out.println( "donotTestMe(): " + count );
}

This annotation is useful during development and debugging phases, but it is not common to leave ignored tests in once the code is ready to go to production.

@FixMethodOrder: Indicates what order of execution should be used. Normally the JUnit processor takes care of this, and the default execution order is unknown and effectively random from the programmer's point of view. This annotation is not really recommended, since JUnit methods and tests should be completely independent of each other and the order of execution should not affect the results. However, there are scenarios where unit tests must follow some ordering rules, and there this annotation may be very useful.

@FixMethodOrder( MethodSorters.NAME_ASCENDING )
public class JunitAnnotated

There are other test suites and libraries making use of annotations, like Mockito or JMock, where annotations are used for the creation of test objects and method expectations. For a complete list of the annotations available in JUnit see https://github.com/junit-team/junit/wiki/Getting-started 12.2.

Hibernate ORM

Hibernate is probably the most used library for object relational mapping in Java. It provides a framework for mapping an object model to relational databases, and it makes use of annotations as part of its design. In this chapter we are going to see a couple of annotations provided by Hibernate and explain how its processor handles them. The snippet below has the annotations @Entity and @Table. These tell the consumer (the Hibernate processor) that the annotated class is an entity bean and which SQL table should be used for the objects of this class. Strictly speaking, @Table specifies the primary table; there are annotations for secondary tables as well.
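To make the "reads classes via reflection and executes methods depending on their annotations" idea concrete, here is a toy runner — not JUnit itself, just a pure-JDK sketch of the mechanism; the @Test and @Ignore annotations below are local stand-ins, not the org.junit ones:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Test { }

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Ignore { }

class SampleTests {
    @Test public void runsFine() { }
    @Ignore @Test public void skipped() { }  // annotated @Test but ignored
    public void notATest() { }               // no annotation: never run
}

class ToyRunner {
    // Invokes every @Test method that is not also @Ignore; returns how many ran.
    static int run(Class<?> suite) throws Exception {
        Object instance = suite.getDeclaredConstructor().newInstance();
        int executed = 0;
        for (Method m : suite.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class) && !m.isAnnotationPresent(Ignore.class)) {
                m.invoke(instance);
                executed++;
            }
        }
        return executed;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(ToyRunner.run(SampleTests.class)); // 1
    }
}
```

Only runsFine() executes; the real JUnit processor works on the same reflective principle, with far more machinery around it.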
@Entity
@Table( name = "hibernate_annotated" )
public class HibernateAnnotated

In the following piece of code we show how to tell the Hibernate processor that the marked element is a table id with the name “id” and that it should be auto generated (the typical auto increment SQL id):

@Id
@GeneratedValue
@Column( name = "id" )
private int id;

In order to specify a standard SQL table column we can write something like this before an element:

@Column( name = "description" )
private String description;

This indicates that the marked element is a column with the name “description” in the table specified at the beginning of the class. These annotations belong to the javax.persistence package (http://docs.oracle.com/javaee/6/api/javax/persistence/package-summary.html) from the Java Enterprise Edition, which covers basically all the annotations that Hibernate uses (or at least the most common ones). 12.3.

Spring MVC

Spring is a framework widely used for implementing Java Enterprise applications. One of its most important features is the use of dependency injection in Java programs. Spring uses annotations as an alternative to XML based configuration (the first Spring versions used only XML based configuration). Currently both options are available; you can configure your projects using annotations, XML configuration files, or both. In my opinion both approaches have benefits and drawbacks. Here we are just going to show two of the many annotations available in Spring. In the following example:

@Component
public class DependencyInjectionAnnotation {

    private String description;

    public String getDescription() {
        return description;
    }

    @Autowired
    public void setDescription( String description ) {
        this.description = description;
    }

}

In the snippet above we can find two kinds of annotations, applied to the whole class and to a method respectively:

@Component: Indicates that the element marked by this annotation, in this case a class, is a candidate for auto detection.
This means that the annotated class may be a bean and should be taken into consideration by the Spring container.

@Autowired: The Spring container will try to perform byType auto wiring (a kind of property matching using the element's type) on this setter method. It can be applied to constructors and properties as well, in which cases the Spring container takes different actions.

For more information about dependency injection and the Spring framework in general please visit: http://projects.spring.io/spring-framework/. 12.4.

Findbugs

This library is used to measure the quality of code and provide a list of possible improvements. It checks the code against a list of predefined (or customized) quality violations. FindBugs provides a list of annotations that allow programmers to change its default behavior. FindBugs mainly reads the code (and the annotations it contains) using reflection and decides what actions to take depending on them. One example is the annotation edu.umd.cs.findbugs.annotations.SuppressFBWarnings, which expects one or more violation keys as parameter (the key name can be omitted, since "value" is the default element). It is very similar to java.lang.SuppressWarnings. It tells the FindBugs processor to ignore specific violations while executing its code analysis. Here is an example of its use:

@SuppressFBWarnings( "HE_EQUALS_USE_HASHCODE" )
public class FindBugsAnnotated {

    @Override
    public boolean equals( Object arg0 ) {
        return super.equals( arg0 );
    }

}

The class above overrides the equals() method of the Object class but does not do the same with the hashCode() method. This is normally a problem, because equals() and hashCode() should both be overridden; otherwise there will be trouble when using instances of the class as keys in, for example, a HashMap. So FindBugs would create an error entry for it in the violations report.
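What "byType auto wiring on a setter" means can be illustrated without Spring at all. The sketch below is a toy container, not Spring's implementation: the @Autowired annotation and ToyContainer class are local stand-ins that mimic the idea of matching a setter's parameter type against available candidates:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Autowired { }

class Bean {
    private String description;
    public String getDescription() { return description; }
    @Autowired
    public void setDescription(String description) { this.description = description; }
}

class ToyContainer {
    // For each @Autowired single-argument setter, find a candidate whose
    // type matches the parameter type and inject it (byType matching).
    static void autowire(Object target, List<Object> candidates) throws Exception {
        for (Method m : target.getClass().getMethods()) {
            if (m.isAnnotationPresent(Autowired.class) && m.getParameterCount() == 1) {
                Class<?> wanted = m.getParameterTypes()[0];
                for (Object candidate : candidates) {
                    if (wanted.isInstance(candidate)) {
                        m.invoke(target, candidate);
                        break;
                    }
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Bean bean = new Bean();
        autowire(bean, Arrays.asList("injected by type"));
        System.out.println(bean.getDescription()); // injected by type
    }
}
```

The real Spring container adds bean scopes, lifecycle, ambiguity resolution and much more, but the annotation-driven reflective matching is the same idea.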
If the annotation @SuppressFBWarnings with the value HE_EQUALS_USE_HASHCODE were not there, the FindBugs processor would report an error of the type:

Bug: com.danibuiza.javacodegeeks.findbugsannotations.FindBugsAnnotated defines equals and uses Object.hashCode()

This class overrides equals(Object), but does not override hashCode(), and inherits the implementation of hashCode() from java.lang.Object (which returns the identity hash code, an arbitrary value assigned to the object by the VM). Therefore, the class is very likely to violate the invariant that equal objects must have equal hashcodes. If you don’t think instances of this class will ever be inserted into a HashMap/HashTable, the recommended hashCode implementation to use is:

public int hashCode() {
    assert false : "hashCode not designed";
    return 42; // any arbitrary constant will do
}

Rank: Troubling (14), confidence: High Pattern: HE_EQUALS_USE_HASHCODE Type: HE, Category: BAD_PRACTICE (Bad practice)

This error contains an explanation of the problem and hints about how to solve it. In this case the solution is basically to implement the hashCode() method properly as well. For a complete list of all FindBugs violations that can be used as values in the SuppressFBWarnings annotation see http://findbugs.sourceforge.net/bugDescriptions.html. 12.5.

JAXB

JAXB is a library used for converting and mapping XML files into Java objects and vice versa. This library comes with the standard JRE; there is no need to download or configure it in any way, and it can be used directly by importing the classes of the package javax.xml.bind.annotation into your applications. JAXB uses annotations to inform its processor (or the JVM) about the XML to code (and vice versa) conversion. For example, there are annotations used to mark XML nodes in the code, XML attributes, values, etc.
Let's see an example. First of all, we declare a class indicating that it should be a node in the XML:

import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlType;

@XmlType( propOrder = { "brand", "model", "year", "km" } )
@XmlRootElement( name = "Car" )
class Car
...

The annotations used here are @XmlType and @XmlRootElement. They inform the JAXB processor that the class Car is going to be a node in the XML produced as the result of the conversion, and @XmlType indicates the order of the properties in the resulting XML. JAXB will perform the proper actions based on these annotations. Apart from setters and getters for the desired properties, nothing else is needed in this class to make it available for conversion. Now we need a consumer program that executes the conversion to XML:

Car car = new Car();
car.setBrand( "Mercedes" );
car.setModel( "SLK" );
car.setYear( 2011 );
car.setKm( 15000 );

Car carVW = new Car();
carVW.setBrand( "VW" );
carVW.setModel( "Touran" );
carVW.setYear( 2005 );
carVW.setKm( 150000 );

/* init jaxb marshaller */
JAXBContext jaxbContext = JAXBContext.newInstance( Car.class );
Marshaller jaxbMarshaller = jaxbContext.createMarshaller();

/* set this flag to true to format the output */
jaxbMarshaller.setProperty( Marshaller.JAXB_FORMATTED_OUTPUT, true );

/* marshaling of java objects into xml (output to standard output) */
jaxbMarshaller.marshal( car, System.out );
jaxbMarshaller.marshal( carVW, System.out );

The output of this program will be something like:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Car>
    <brand>Mercedes</brand>
    <model>SLK</model>
    <year>2011</year>
    <km>15000</km>
</Car>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Car>
    <brand>VW</brand>
    <model>Touran</model>
    <year>2005</year>
    <km>150000</km>
</Car>

There is a whole list of annotations that can be used in JAXB for XML to Java conversion. More information can be found at https://jaxb.java.net/ 13.
Summary

In this article we explained that annotations in Java are a very important feature, available since Java 5, and we listed several use cases where they can be applied. Basically, annotations are metadata containing information about the marked code. They do not change or affect the code in any way, and they can be used by third-party applications, called consumers, to analyze the code using reflection. We listed the built in annotations available by default in Java — the meta annotations like @Target or @Retention, and others like @Override or @SuppressWarnings — as well as the annotation related features coming out in Java 8, like the @Repeatable annotation, the @FunctionalInterface annotation and type annotations. We also showed a couple of code examples where annotations were used in combination with reflection, and we described several real life libraries that make extensive use of annotations in Java, like JUnit, Spring and Hibernate. Annotations are a very powerful mechanism in Java for analyzing the metadata of any kind of program, applicable in different scopes like validation, dependency injection and unit testing.

14. Download

This was a tutorial on Java annotations. You can download the full source code of this tutorial here: customAnnotations

15. Resources

Here is a list of very useful resources related to Java annotations:

Official Java annotations site: http://docs.oracle.com/javase/tutorial/java/annotations/
Wikipedia article about annotations in Java: http://en.wikipedia.org/wiki/Java_annotation
Java Specification Request 250: http://en.wikipedia.org/wiki/JSR_250
Annotations white paper from Oracle: http://www.oracle.com/technetwork/articles/hunter-meta-096020.html
Annotations API: http://docs.oracle.com/javase/7/docs/api/java/lang/annotation/package-summary.html...

Using the Neo4j browser with Embedded Neo4j

There are times when you have an application using Neo4j in embedded mode but also need to play around with the graph using the Neo4j web browser. Since the database can be accessed by at most one process at a time, trying to start up the Neo4j server while your embedded Neo4j application is running won’t work. The WrappingNeoServerBootstrapper, although deprecated, comes to the rescue. Here’s how to set it up.

1. Make sure you have these Maven dependencies:

<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j</artifactId>
    <version>2.1.5</version>
</dependency>
<dependency>
    <groupId>org.neo4j.app</groupId>
    <artifactId>neo4j-server</artifactId>
    <version>2.1.5</version>
</dependency>
<dependency>
    <groupId>org.neo4j.app</groupId>
    <artifactId>neo4j-server</artifactId>
    <version>2.1.5</version>
    <classifier>static-web</classifier>
</dependency>

2. Start the WrappingNeoServerBootstrapper:

public static void connectAndStartBootstrapper() {
    WrappingNeoServerBootstrapper neoServerBootstrapper;
    GraphDatabaseService db = new GraphDatabaseFactory()
        .newEmbeddedDatabaseBuilder("/path/to/db").newGraphDatabase();
    registerShutdownHook(db);
    try {
        GraphDatabaseAPI api = (GraphDatabaseAPI) db;
        ServerConfigurator config = new ServerConfigurator(api);
        config.configuration()
            .addProperty(Configurator.WEBSERVER_ADDRESS_PROPERTY_KEY, "");
        config.configuration()
            .addProperty(Configurator.WEBSERVER_PORT_PROPERTY_KEY, "7575");

        neoServerBootstrapper = new WrappingNeoServerBootstrapper(api, config);
        neoServerBootstrapper.start();
    } catch(Exception e) {
        //handle appropriately
    }
}

Two things happen here: the GraphDatabaseService is ready to use in embedded mode, and the Neo4j web browser is available on the configured port (7575 in this example). You need not start them together; you can instead start and stop the WrappingNeoServerBootstrapper on demand — you just need a handle to the GraphDatabaseService. Again, note that the WrappingNeoServerBootstrapper is deprecated.
At the time of writing, this code works on 2.1.5 but does not offer any guarantees for future releases of Neo4j.Reference: Using the Neo4j browser with Embedded Neo4j from our JCG partner Luanne Misquitta at the Thought Bytes blog....

Five Reasons Why You Should Keep Your Package Dependencies Cycle Free

If you are so unlucky as to work with me on a project, you will suffer from the rule that all package dependencies must be cycle free. I will not only require this, but I will also create a unit test enforcing it using Degraph. Here are the reasons why I think a cycle free package structure is beneficial for a project.

Helpful abstractions: If you implement stuff without thinking too much about dependencies, you end up with cyclic dependencies almost for sure. In order to break those cycles you often have to introduce new abstractions in the form of interfaces. These interfaces often turn out to create a cleaner abstraction of what is going on in the application than the direct dependency that was there before. For example, consider two packages Something and Other that depend on each other. As described, there is no way to tell why they depend on each other. But in order to break one of the dependencies you might decide to introduce an interface, and the name of that interface might include valuable additional information about the relationship of the two. Imagine the interface ends up being named SomethingDeletionListener, located in Something and implemented in Other. This already tells you something about the relationship of the two packages, doesn’t it?

Clean orthogonal package structure: Whenever you organize something in a tree like structure, you probably want an orthogonal structure in that tree. This means all subbranches of a branch are elements of a single categorization. A good example is Customer, Order, Wishlist; a different, also good example is UserInterface, Persistence, Domain. These kinds of structures give a clear indication of where a class belongs. If you mix the two approaches you end up with something like Customer, Order, Persistence. In such a structure it is not at all clear where classes for the persistence of customers belong.
The result is a mess, which typically results in cycles in the dependencies, since a question like "should Customer depend on Persistence or the other way around" doesn’t even make sense.

Enables reuse: Ever tried to reuse a package, or even just a single class, from a project that doesn’t care about dependencies? I have. In 9 out of 10 cases I had two choices: either take the complete project (not really an option), or do some heavy refactoring of the class before it even compiles without all the other stuff in the project. On the other hand, in projects where the package dependencies form a nice directed acyclic graph, it is perfectly clear what has to go along with the class. Also, the stuff people are interested in reusing is typically close to the leaves of the graph and can be extracted on its own or with very few dependencies.

Enables partial rewrites: Sometimes an idea once considered great turns out to be a really bad one. Sometimes it is so bad you want to redo it. Acyclic dependencies limit the amount of code affected by the change. With cyclic dependencies, often the complete application is at least in danger of being affected.

Independent deployment: On the other hand, sometimes ideas actually turn out to be great. Maybe so great that they get used so heavily that you need to scale up and deploy a part on three additional servers of its own to handle the heavy load. Good luck splitting your application into two or more parts that can be deployed separately when you have tangles between the packages. With a cycle free structure, the places where you can cut should be rather obvious.

Reference: Five Reasons Why You Should Keep Your Package Dependencies Cycle Free from our JCG partner Jens Schauder at the Schauderhaft blog....
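The SomethingDeletionListener idea from the first point can be sketched in a few lines. This is an illustrative sketch, not code from the article; the package placement is indicated in comments since everything lives in one file here:

```java
// Before: package something depends on package other AND other depends on
// something -- a cycle. Introducing a listener interface owned by the
// Something side inverts one of the two dependencies.

// -- would live in package something --
interface SomethingDeletionListener {
    void somethingDeleted(String id);
}

class Something {
    private final SomethingDeletionListener listener;
    Something(SomethingDeletionListener listener) { this.listener = listener; }

    void delete(String id) {
        // ... perform the deletion, then notify without knowing who listens
        listener.somethingDeleted(id);
    }
}

// -- would live in package other: it depends on something, but something
// no longer depends on other, so the cycle is gone --
class Other implements SomethingDeletionListener {
    String lastDeleted;
    @Override public void somethingDeleted(String id) { lastDeleted = id; }
}

class CycleFreeDemo {
    public static void main(String[] args) {
        Other other = new Other();
        new Something(other).delete("42");
        System.out.println(other.lastDeleted); // 42
    }
}
```

The interface name now documents why the packages are related at all, which the original cyclic dependency never did.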

Batching (collapsing) requests in Hystrix

Hystrix has an advanced feature of collapsing (or batching) requests. If two or more commands run a similar request at the same time, Hystrix can combine them, run one batched request and dispatch the split results back to all commands. Let’s first see how Hystrix works without collapsing. Imagine we have a service that looks up the StockPrice of a given Ticker:

import lombok.Value;
import java.math.BigDecimal;
import java.time.Instant;

@Value
class Ticker {
    String symbol;
}

@Value
class StockPrice {
    BigDecimal price;
    Instant effectiveTime;
}

interface StockPriceGateway {

    default StockPrice load(Ticker stock) {
        final Set<Ticker> oneTicker = Collections.singleton(stock);
        return loadAll(oneTicker).get(stock);
    }

    ImmutableMap<Ticker, StockPrice> loadAll(Set<Ticker> tickers);
}

The core implementation of StockPriceGateway must provide the loadAll() batch method, while the load() method is implemented for our convenience. So our gateway is capable of loading multiple prices in one batch (e.g. to reduce latency or network protocol overhead), but at the moment we are not using this feature and always load the price of one stock at a time:

class StockPriceCommand extends HystrixCommand<StockPrice> {

    private final StockPriceGateway gateway;
    private final Ticker stock;

    StockPriceCommand(StockPriceGateway gateway, Ticker stock) {
        super(HystrixCommandGroupKey.Factory.asKey("Stock"));
        this.gateway = gateway;
        this.stock = stock;
    }

    @Override
    protected StockPrice run() throws Exception {
        return gateway.load(stock);
    }
}

Such a command will always call StockPriceGateway.load() for each and every Ticker, as illustrated by the following tests:

class StockPriceCommandTest extends Specification {

    def gateway = Mock(StockPriceGateway)

    def 'should fetch price from external service'() {
        given:
        gateway.load(TickerExamples.any()) >> StockPriceExamples.any()
        def command = new StockPriceCommand(gateway, TickerExamples.any())

        when:
        def price = command.execute()

        then:
        price == StockPriceExamples.any()
    }

    def 'should call gateway exactly once when running Hystrix command'() {
        given:
        def command = new StockPriceCommand(gateway, TickerExamples.any())

        when:
        command.execute()

        then:
        1 * gateway.load(TickerExamples.any())
    }

    def 'should call gateway twice when command executed two times'() {
        given:
        def commandOne = new StockPriceCommand(gateway, TickerExamples.any())
        def commandTwo = new StockPriceCommand(gateway, TickerExamples.any())

        when:
        commandOne.execute()
        commandTwo.execute()

        then:
        2 * gateway.load(TickerExamples.any())
    }

    def 'should call gateway twice even when executed in parallel'() {
        given:
        def commandOne = new StockPriceCommand(gateway, TickerExamples.any())
        def commandTwo = new StockPriceCommand(gateway, TickerExamples.any())

        when:
        Future<StockPrice> futureOne = commandOne.queue()
        Future<StockPrice> futureTwo = commandTwo.queue()

        and:
        futureOne.get()
        futureTwo.get()

        then:
        2 * gateway.load(TickerExamples.any())
    }

}

If you don’t know Hystrix: by wrapping an external call in a command you gain a lot of features like timeouts, circuit breakers, etc., but this is not the focus of this article. Look at the last two tests: when asking for the price of an arbitrary ticker twice, sequentially or in parallel (queue()), our external gateway is also called twice. The last test is especially interesting: we ask for the same ticker at almost the same time, but Hystrix can’t figure that out. These two commands are fully independent, will be executed in different threads and don’t know anything about each other, even though they run at almost the same time. Collapsing is all about finding such similar requests and combining them. Batching (I will use this term interchangeably with collapsing) doesn’t happen automatically and requires a bit of coding.
But first let’s see how it behaves: def 'should collapse two commands executed concurrently for the same stock ticker'() { given: def anyTicker = TickerExamples.any() def tickers = [anyTicker] as Setand: def commandOne = new StockTickerPriceCollapsedCommand(gateway, anyTicker) def commandTwo = new StockTickerPriceCollapsedCommand(gateway, anyTicker)when: Future<StockPrice> futureOne = commandOne.queue() Future<StockPrice> futureTwo = commandTwo.queue()and: futureOne.get() futureTwo.get()then: 0 * gateway.load(_) 1 * gateway.loadAll(tickers) >> ImmutableMap.of(anyTicker, StockPriceExamples.any()) }def 'should collapse two commands executed concurrently for the different stock tickers'() { given: def anyTicker = TickerExamples.any() def otherTicker = TickerExamples.other() def tickers = [anyTicker, otherTicker] as Setand: def commandOne = new StockTickerPriceCollapsedCommand(gateway, anyTicker) def commandTwo = new StockTickerPriceCollapsedCommand(gateway, otherTicker)when: Future<StockPrice> futureOne = commandOne.queue() Future<StockPrice> futureTwo = commandTwo.queue()and: futureOne.get() futureTwo.get()then: 1 * gateway.loadAll(tickers) >> ImmutableMap.of( anyTicker, StockPriceExamples.any(), otherTicker, StockPriceExamples.other()) }def 'should correctly map collapsed response into individual requests'() { given: def anyTicker = TickerExamples.any() def otherTicker = TickerExamples.other() def tickers = [anyTicker, otherTicker] as Set gateway.loadAll(tickers) >> ImmutableMap.of( anyTicker, StockPriceExamples.any(), otherTicker, StockPriceExamples.other())and: def commandOne = new StockTickerPriceCollapsedCommand(gateway, anyTicker) def commandTwo = new StockTickerPriceCollapsedCommand(gateway, otherTicker)when: Future<StockPrice> futureOne = commandOne.queue() Future<StockPrice> futureTwo = commandTwo.queue()and: def anyPrice = futureOne.get() def otherPrice = futureTwo.get()then: anyPrice == StockPriceExamples.any() otherPrice == StockPriceExamples.other() 
}

The first test proves that instead of calling load() twice, we called loadAll() only once. Also notice that since we asked for the same Ticker (from two different threads), loadAll() asks for only one ticker. The second test shows two concurrent requests for two different tickers being collapsed into one batch call. The third test makes sure we still get proper responses to each individual request. Instead of extending HystrixCommand we must extend the more complex HystrixCollapser. Now it’s time to see the StockTickerPriceCollapsedCommand implementation, which seamlessly replaced StockPriceCommand:

class StockTickerPriceCollapsedCommand extends HystrixCollapser<ImmutableMap<Ticker, StockPrice>, StockPrice, Ticker> {

    private final StockPriceGateway gateway;
    private final Ticker stock;

    StockTickerPriceCollapsedCommand(StockPriceGateway gateway, Ticker stock) {
        super(HystrixCollapser.Setter.withCollapserKey(HystrixCollapserKey.Factory.asKey("Stock"))
                .andCollapserPropertiesDefaults(HystrixCollapserProperties.Setter().withTimerDelayInMilliseconds(100)));
        this.gateway = gateway;
        this.stock = stock;
    }

    @Override
    public Ticker getRequestArgument() {
        return stock;
    }

    @Override
    protected HystrixCommand<ImmutableMap<Ticker, StockPrice>> createCommand(Collection<CollapsedRequest<StockPrice, Ticker>> collapsedRequests) {
        final Set<Ticker> stocks = collapsedRequests.stream()
                .map(CollapsedRequest::getArgument)
                .collect(toSet());
        return new StockPricesBatchCommand(gateway, stocks);
    }

    @Override
    protected void mapResponseToRequests(ImmutableMap<Ticker, StockPrice> batchResponse, Collection<CollapsedRequest<StockPrice, Ticker>> collapsedRequests) {
        collapsedRequests.forEach(request -> {
            final Ticker ticker = request.getArgument();
            final StockPrice price = batchResponse.get(ticker);
            request.setResponse(price);
        });
    }
}

A lot is going on here, so let’s review StockTickerPriceCollapsedCommand step by step.
First, the three generic types:

- BatchReturnType (ImmutableMap<Ticker, StockPrice> in our example) is the type of the batched command response. As you will see later, the collapser turns multiple small commands into one batch command; this is the type of that batch command’s response. Notice that it’s the same type as StockPriceGateway.loadAll() returns.
- ResponseType (StockPrice) is the type of each individual command being collapsed. In our case we are collapsing HystrixCommand<StockPrice>. Later we will split the value of BatchReturnType into multiple StockPrice instances.
- RequestArgumentType (Ticker) is the input of each individual command we are about to collapse (batch). When multiple commands are batched together, we eventually replace all of them with one batched command. This command should receive all individual requests in order to perform one batch request.

withTimerDelayInMilliseconds(100) will be explained soon. createCommand() creates the batch command. This command should replace all individual commands and perform the batched logic. In our case, instead of multiple individual load() calls we just make one:

class StockPricesBatchCommand extends HystrixCommand<ImmutableMap<Ticker, StockPrice>> {

    private final StockPriceGateway gateway;
    private final Set<Ticker> stocks;

    StockPricesBatchCommand(StockPriceGateway gateway, Set<Ticker> stocks) {
        super(HystrixCommandGroupKey.Factory.asKey("Stock"));
        this.gateway = gateway;
        this.stocks = stocks;
    }

    @Override
    protected ImmutableMap<Ticker, StockPrice> run() throws Exception {
        return gateway.loadAll(stocks);
    }
}

The only difference between this class and StockPriceCommand is that it takes a bunch of Tickers and returns prices for all of them. Hystrix will collect a few instances of StockTickerPriceCollapsedCommand and, once it has enough (more on that later), it will create a single StockPricesBatchCommand. I hope this is clear, because mapResponseToRequests() is slightly more involved.
Once our collapsed StockPricesBatchCommand finishes, we must somehow split the batch response and communicate replies back to the individual commands, which are unaware of collapsing. From that perspective the mapResponseToRequests() implementation is fairly straightforward: we receive the batch response and a collection of wrapped CollapsedRequest<StockPrice, Ticker> instances. We must now iterate over all awaiting individual requests and complete them (setResponse()). If we don’t complete some of the requests, they will hang infinitely and eventually time out.

How it works

This is the right moment to describe how collapsing is implemented. I said before that collapsing happens when two requests occur at the same time. There is no such thing as the same time. In reality, when the first collapsible request comes in, Hystrix starts a timer. In our examples we set it to 100 milliseconds. During that period our command is suspended, waiting for other commands to join. After this configurable period, Hystrix will call createCommand(), gathering all request keys (by calling getRequestArgument()), and run it. When the batched command finishes, it will let us dispatch results to all awaiting individual commands. It is also possible to limit the number of collapsed requests if we are afraid of creating a humongous batch – on the other hand, how many concurrent requests can fit within this short time slot?

Use cases and drawbacks

Request collapsing should be used in systems with extreme load – a high frequency of requests. If you get just one request per collapsing time window (100 milliseconds in our examples), collapsing will just add overhead. That’s because every time you call a collapsible command, it must wait just in case some other command wants to join and form a batch. This makes sense only when at least a couple of commands are collapsed.
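The timer mechanics described above can be sketched without Hystrix at all. The following toy collapser (hypothetical names, plain java.util.concurrent, no error handling – a sketch, not Hystrix’s actual implementation) shows the essential moves: the first request arms a timer, later requests join the pending batch, and when the timer fires, one batch call is made and its result is split back across the waiting futures:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.*;
import java.util.function.Function;

// Toy version of time-window request collapsing.
class MiniCollapser<K, V> {
    private final Function<Set<K>, Map<K, V>> batchLoader;
    private final long windowMillis;
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private Map<K, CompletableFuture<V>> pending;   // guarded by 'this'

    MiniCollapser(Function<Set<K>, Map<K, V>> batchLoader, long windowMillis) {
        this.batchLoader = batchLoader;
        this.windowMillis = windowMillis;
    }

    synchronized CompletableFuture<V> submit(K key) {
        if (pending == null) {                      // first request arms the timer
            pending = new HashMap<>();
            timer.schedule(this::flush, windowMillis, TimeUnit.MILLISECONDS);
        }
        // identical keys arriving within the window share one future
        return pending.computeIfAbsent(key, k -> new CompletableFuture<>());
    }

    private void flush() {
        Map<K, CompletableFuture<V>> batch;
        synchronized (this) {
            batch = pending;
            pending = null;                         // next request starts a new window
        }
        Map<K, V> result = batchLoader.apply(batch.keySet());  // one batch call
        batch.forEach((k, f) -> f.complete(result.get(k)));    // map response to requests
    }

    void shutdown() {
        timer.shutdown();
    }
}
```

Production code would also need to cap the batch size and propagate batch failures to every pending future – otherwise, as noted above, requests would hang and eventually time out.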
Time wasted waiting is balanced by savings in network latency and/or better utilization of resources in our collaborator (batch requests are very often much faster than individual calls). But keep in mind that collapsing is a double-edged sword, useful in specific cases. One last thing to remember: in order to use request collapsing you need HystrixRequestContext.initializeContext() and shutdown() in a try-finally block:

HystrixRequestContext context = HystrixRequestContext.initializeContext();
try {
    //...
} finally {
    context.shutdown();
}

Collapsing vs. caching

You might think that collapsing can be replaced with proper caching. This is not true. You use a cache when:

- the resource is likely to be accessed multiple times
- we can safely use the previous value; it will remain valid for some period of time, or we know precisely how to invalidate it
- we can afford concurrent requests for the same resource to compute it multiple times

On the other hand, collapsing does not enforce locality of data (1); it always hits the real service and never returns stale data (2). And finally, if we ask for the same resource from multiple threads, we will only call the backing service once (3). In the case of caching, unless your cache is really smart, two threads will independently discover the absence of a given resource in the cache and ask the backing service twice. However, collapsing can work together with caching – by consulting the cache before running a collapsible command.

Summary

Request collapsing is a useful tool, but with very limited use cases. It can significantly improve throughput in our system as well as limit load on an external service. Collapsing can magically flatten peaks in traffic rather than spreading them all over. Just make sure you are using it for commands running with extreme frequency.

Reference: Batching (collapsing) requests in Hystrix from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....

STOMP over WebSocket

STOMP is the Simple Text Oriented Messaging Protocol. It defines an interoperable wire format that allows a STOMP client to communicate with any STOMP message broker, providing easy and widespread messaging interoperability among different languages, platforms, and brokers. The specification defines what makes it different from other messaging protocols: it is an alternative to other open messaging protocols such as AMQP and to implementation-specific wire protocols used in JMS brokers, such as OpenWire. It distinguishes itself by covering a small subset of commonly used messaging operations rather than providing a comprehensive messaging API. STOMP is a frame-based protocol. A frame consists of a command, a set of optional headers, and an optional body. Commonly used commands are:

- CONNECT
- SEND
- SUBSCRIBE
- UNSUBSCRIBE
- ACK
- NACK
- DISCONNECT

WebSocket messages are also transmitted as frames. STOMP over WebSocket maps STOMP frames to WebSocket frames. Messaging servers like HornetQ, ActiveMQ, RabbitMQ, and others provide native support for STOMP over WebSocket. Let’s take a look at a simple sample of how to use STOMP over WebSocket with ActiveMQ. The source code for the sample is available at github.com/arun-gupta/wildfly-samples/tree/master/websocket-stomp. Let’s get started!

Download ActiveMQ 5.10 or provision an ActiveMQ instance in OpenShift as explained at github.com/arun-gupta/activemq-openshift-cartridge:

workspaces> rhc app-create activemq diy --from-code=git://github.com/arun-gupta/activemq-openshift-cartridge.git
Using diy-0.1 (Do-It-Yourself 0.1) for 'diy'

Application Options
-------------------
Domain:      milestogo
Cartridges:  diy-0.1
Source Code: git://github.com/arun-gupta/activemq-openshift-cartridge.git
Gear Size:   default
Scaling:     no

Creating application 'activemq' ... done

Disclaimer: This is an experimental cartridge that provides a way to try unsupported languages, frameworks, and middleware on OpenShift.

Waiting for your DNS name to be available ...
done

Cloning into 'activemq'...
Warning: Permanently added the RSA host key for IP address '' to the list of known hosts.

Your application 'activemq' is now available.

URL:        http://activemq-milestogo.rhcloud.com/
SSH to:     545b096a500446e6710004ae@activemq-milestogo.rhcloud.com
Git remote: ssh://545b096a500446e6710004ae@activemq-milestogo.rhcloud.com/~/git/activemq.git/
Cloned to:  /Users/arungupta/workspaces/activemq

Run 'rhc show-app activemq' for more details about your app.

workspaces> rhc port-forward activemq
Checking available ports ... done
Forwarding ports ...
To connect to a service running on OpenShift, use the Local address

Service Local OpenShift
------- ----- ---------
java    =>
java    =>
java    =>
java    =>
java    =>
java    =>

CTRL-C to terminate port forwarding

Download the WildFly 8.1 zip, unzip it, and start the server with bin/standalone.sh. Then clone the repo and deploy the sample on WildFly:

git clone https://github.com/arun-gupta/wildfly-samples.git
cd wildfly-samples
mvn wildfly:deploy

Access the application at localhost:8080/websocket-stomp-1.0-SNAPSHOT/. Specify the text payload “foobar” and use ActiveMQ conventions for topics and queues to specify a queue name such as “/queue/myQ1”. Click on the Connect, Send Message, Subscribe, and Disconnect buttons one after the other. This will display messages in your browser window as the WebSocket connection is established, a STOMP message is sent to the queue, the queue is subscribed to in order to receive the message, and the connection is finally closed. The STOMP frames can be seen using Chrome Developer Tools. As you can see there, each STOMP frame is mapped to a WebSocket frame. In short, ActiveMQ on OpenShift is running a STOMP broker on port 61614, which is accessible on localhost:61614 by port-forwarding. Clicking on the Connect button uses the Stomp library bundled with the application to establish a WebSocket connection with ws://localhost:61614/.
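For reference, the frames going over that connection are plain text: a SEND frame, for example, is just the command, headers, a blank line, the body, and a NUL terminator. A small illustrative builder (a hypothetical helper for demonstration, not part of the Stomp library):

```java
import java.nio.charset.StandardCharsets;

// Illustrative only: builds the textual STOMP SEND frame that a client
// library transmits as the payload of a WebSocket frame.
class StompFrames {
    static String send(String destination, String body) {
        int length = body.getBytes(StandardCharsets.UTF_8).length;
        return "SEND\n"                               // command
                + "destination:" + destination + "\n" // headers
                + "content-length:" + length + "\n"
                + "\n"                                // blank line ends the headers
                + body                                // body
                + "\u0000";                           // NUL terminator ends the frame
    }
}
```

Sending “foobar” to /queue/myQ1 as in the sample produces the frame SEND\ndestination:/queue/myQ1\ncontent-length:6\n\nfoobar\0, which is what shows up in the Frames tab of Developer Tools.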
Subsequent buttons send STOMP frames over WebSocket, as shown in the Frames tab of Developer Tools. Read more details about how all the pieces work together at jmesnil.net/stomp-websocket/doc/. Jeff has also written an excellent book explaining STOMP over WebSocket, and a lot of other interesting things that can be done over WebSocket, in his Mobile and Web Messaging book.

Reference: STOMP over WebSocket from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.