
Spring Data JPA Tutorial: Introduction

Creating repositories that use the Java Persistence API is a cumbersome process that takes a lot of time and requires a lot of boilerplate code. We can eliminate some of that boilerplate by following these steps:

- Create an abstract base repository class that provides CRUD operations for entities.
- Create a concrete repository class that extends the abstract base repository class.

The problem with this approach is that we still have to write the code that creates our database queries and invokes them. To make matters worse, we have to do this every time we want to create a new database query. This is a waste of time. What would you say if I told you that we can create JPA repositories without writing any boilerplate code? The odds are that you might not believe me, but Spring Data JPA helps us do just that. The website of the Spring Data JPA project states that:

"Implementing a data access layer of an application has been cumbersome for quite a while. Too much boilerplate code has to be written to execute simple queries as well as perform pagination and auditing. Spring Data JPA aims to significantly improve the implementation of data access layers by reducing the effort to the amount that's actually needed. As a developer you write your repository interfaces, including custom finder methods, and Spring will provide the implementation automatically."

This blog post provides an introduction to Spring Data JPA. We will learn what Spring Data JPA really is and take a quick look at the Spring Data repository interfaces. Let's get started.

What Is Spring Data JPA?

Spring Data JPA is not a JPA provider. It is a library/framework that adds an extra layer of abstraction on top of our JPA provider. If we decide to use Spring Data JPA, the repository layer of our application contains three layers:

- Spring Data JPA provides support for creating JPA repositories by extending the Spring Data repository interfaces.
- Spring Data Commons provides the infrastructure that is shared by the datastore-specific Spring Data projects.
- The JPA provider implements the Java Persistence API.

The following figure illustrates the structure of our repository layer.

Additional reading: Spring Data JPA versus JPA: What's the difference?

At first it seems that Spring Data JPA makes our application more complicated, and in a way that is true. It does add an additional layer to our repository layer, but at the same time it frees us from writing any boilerplate code. That sounds like a good tradeoff. Right?

Introduction to Spring Data Repositories

The power of Spring Data JPA lies in the repository abstraction that is provided by the Spring Data Commons project and extended by the datastore-specific sub-projects. We can use Spring Data JPA without paying any attention to the actual implementation of the repository abstraction, but we have to be familiar with the Spring Data repository interfaces.

First, the Spring Data Commons project provides the following interfaces:

- The Repository<T, ID extends Serializable> interface is a marker interface that has two purposes: it captures the type of the managed entity and the type of the entity's id, and it helps the Spring container discover the "concrete" repository interfaces during classpath scanning.
- The CrudRepository<T, ID extends Serializable> interface provides CRUD operations for the managed entity.
- The PagingAndSortingRepository<T, ID extends Serializable> interface declares the methods that are used to sort and paginate entities that are retrieved from the database.
- The QueryDslPredicateExecutor<T> interface is not a "repository interface". It declares the methods that are used to retrieve entities from the database by using QueryDsl Predicate objects.

Second, the Spring Data JPA project provides the following interfaces:

- The JpaRepository<T, ID extends Serializable> interface is a JPA-specific repository interface that combines the methods declared by the common repository interfaces behind a single interface.
- The JpaSpecificationExecutor<T> interface is not a "repository interface". It declares the methods that are used to retrieve entities from the database by using Specification<T> objects that use the JPA criteria API.

The repository hierarchy looks as follows.

That is nice, but how can we use them? That is a fair question. The next parts of this tutorial will answer that question, but essentially we have to follow these steps (a minimal sketch follows at the end of this post):

- Create a repository interface and extend one of the repository interfaces provided by Spring Data.
- Add custom query methods to the created repository interface (if we need them, that is).
- Inject the repository interface into another component and use the implementation that is provided automatically by Spring.

Let's move on and summarize what we learned from this blog post.

Summary

This blog post has taught us two things:

- Spring Data JPA is not a JPA provider. It simply "hides" the Java Persistence API (and the JPA provider) behind its repository abstraction.
- Spring Data provides multiple repository interfaces that are used for different purposes.

The next part of this tutorial describes how we can get the required dependencies. If you want to know more about Spring Data JPA, you should read my Spring Data JPA Tutorial.

Reference: Spring Data JPA Tutorial: Introduction from our JCG partner Petri Kainulainen at the Petri Kainulainen blog.
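To make the three steps above concrete, here is a minimal sketch of what such a repository and its usage could look like. The Todo entity, its fields, and the TodoService class are invented for this illustration; they are not part of the original tutorial, and the sketch assumes a standard Spring / JPA setup of that era.

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Service;

// A deliberately minimal entity, made up for this example.
@Entity
class Todo {
    @Id @GeneratedValue
    private Long id;
    private String title;

    protected Todo() {} // required by JPA

    Todo(String title) { this.title = title; }

    String getTitle() { return title; }
}

// Step 1: create a repository interface and extend a Spring Data interface.
interface TodoRepository extends CrudRepository<Todo, Long> {

    // Step 2: add a custom query method; Spring Data derives the query
    // from the method name, so we never write the query code ourselves.
    List<Todo> findByTitle(String title);
}

// Step 3: inject the repository into another component and use the
// implementation that Spring provides automatically at runtime.
@Service
class TodoService {

    private final TodoRepository repository;

    @Autowired
    TodoService(TodoRepository repository) {
        this.repository = repository;
    }

    List<Todo> findByTitle(String title) {
        return repository.findByTitle(title);
    }
}

Note that there is no implementation class for TodoRepository anywhere: Spring Data generates it, which is exactly the boilerplate the article promises to remove.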

Can MicroServices Architecture Solve All Your Problems?

IT is one field where you can find new things coming every day. These days the developer community websites are flooded with MicroServices and Docker related content. Among them, the idea of MicroServices is very exciting and encourages a better way of building software systems. But as with any architectural style, there are pros and cons to every approach. Before discussing the good and bad sides of the MicroServices approach, first let me say what I understand by MicroServices: a MicroServices architecture encourages building small, focused subsystems which can be integrated into the whole system, preferably using the REST protocol.

Now let's discuss various aspects of MicroServices architecture.

The dream of every software architect/developer: First of all, the idea of MicroServices is not new at all. From the very beginning our elders suggested writing classes that focus on a single responsibility and writing methods that do one particular thing and do it well. We were also encouraged to build separate modules that perform functionally related tasks. Then we bundle all these separate modules together and build an application, delegating the appropriate tasks to the respective modules. This is what we have tried to do for many years. But the idea of MicroServices takes this approach to the next level, where you can deploy each module as an individual deployable unit and each service can communicate with any other service based on some agreed protocol (preferably REST, another trendy cool thing!).

So what are the advantages of this MicroServices architecture? There are plenty:

- You will have many small services with manageable codebases that are easy to read and understand.
- You can confidently refactor or rewrite an entire service because there won't be any impact on other services.
- Each microservice can be deployed independently, so adding new features or upgrading any existing software/hardware platform won't affect other services.
- You can easily adopt the next cool technology. If one of the microservices is a very critical service and performance is the highest priority, we can write that particular service in Scala in order to leverage multi-core hardware.
- If you are a service provider company you can sell each service separately, possibly making better money compared to selling a whole monolithic product.
- And the most important factor: the term MicroService is cool!

What is the other side of MicroServices architecture? As with any approach, MicroServices also has some downsides and associated costs.

"Great power comes with great responsibility." – Uncle Ben

Let us see what the challenges are in implementing a system using a MicroServices architecture. The idea of MicroServices is very simple, but it is complex to implement in reality. In a monolithic system, the communication between the various subsystems is mostly direct object communication. But in a MicroServices based system, in order to communicate with other services you may use REST calls, which means additional HTTP overhead and its inherent issues like network latency, possible communication failures and so on. So we need to consider various aspects while implementing inter-service communication logic, such as retry, fail-over and service-down scenarios (a minimal retry sketch appears at the end of this post).

How good is your DevOps infrastructure? In order to go with a MicroServices architecture, an organization should have a good DevOps team to properly maintain the dozens of MicroService applications. Does your organization have a DevOps culture?
Or does your organization have the problem of a blame game between Devs and Ops? If your organization doesn't have a good DevOps culture and the software/hardware resources, then a MicroServices architecture will be much more difficult to adopt.

Are we fixing the actual problem at all? Many people are now saying that MicroServices architecture is better than monolithic architecture. But is monolithic architecture the actual reason why many projects are failing? Will MicroServices architecture save projects from failing? I guess NO. Think about the reasons for your previously failed projects. Did those projects fail because of technology issues or people issues? I have never seen a project that failed because of the wrong technology selection or the wrong architectural approach. But I have seen many projects fail just because of problems with people. I feel there are more severe issues than architecture issues causing projects to fail, such as:

- Having developers without sufficient skills
- Having developers who don't want to learn anything new
- Having developers who don't have the courage to say "NO, we can't do that in that time"
- Having architects who abandoned coding years ago
- Having architects who think they know everything and don't need to listen to their developers' pain
- Having managers who just blame the developers for not meeting the imposed deadlines without ever asking the developers for timelines

These are the real problems that are causing the project failures. Now do you think just moving to a MicroServices architecture will save IT without fixing these problems? Continuously innovating new ways of doing things is awesome and is required to move ahead. At the same time, assuming that the next cool methodology/technology will fix all the problems is also wrong. So those of you who are just about to jump on the MicroServices boat: THINK. FIX THE REAL PROBLEMS FIRST. You can't fill a bottle that has a hole at its bottom.

Reference: Can MicroServices Architecture Solve All Your Problems? from our JCG partner Siva Reddy at the My Experiments on Technology blog.
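As a small illustration of the inter-service communication concerns mentioned above (retry, fail-over, service-down handling), here is a minimal, hand-rolled retry sketch in plain Java. The class and method names are invented for this example; in a real system you would more likely use an existing library (for instance a circuit-breaker implementation) than roll your own.

import java.util.concurrent.Callable;

// A tiny retry helper: invokes a remote call and retries a fixed number of
// times with a crude linear backoff before giving up. Invented for illustration only.
public class RetryingCaller {

    private final int maxAttempts;
    private final long backoffMillis;

    public RetryingCaller(int maxAttempts, long backoffMillis) {
        this.maxAttempts = maxAttempts;
        this.backoffMillis = backoffMillis;
    }

    public <T> T call(Callable<T> remoteCall) throws Exception {
        Exception lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return remoteCall.call();
            } catch (Exception e) {
                lastFailure = e; // e.g. timeout or connection refused
                if (attempt < maxAttempts) {
                    Thread.sleep(backoffMillis * attempt); // back off before retrying
                }
            }
        }
        // All attempts failed: surface the problem so the caller can fail over
        // to another instance or degrade gracefully.
        throw lastFailure;
    }
}

A caller could then wrap an HTTP request to another service in something like new RetryingCaller(3, 200).call(() -> httpClient.get("/orders/42")), where httpClient stands for whatever HTTP client the project already uses; that call is purely illustrative.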

What I look for in frameworks

In every project the discussion comes up over and over again: should we use framework X? Or Y? Or no framework at all? Even when you limit yourself to frameworks for web development in the Java space, the choices are so plentiful that nobody can know them all. So I need a quick way to identify which frameworks sound promising to me and which I keep for weekend projects.

Stay away from the new kid on the block. While it might be fun to play with the coolest, newest thing, I work on projects that have a life cycle of 10-30 years. I wouldn't want to support an application using some library that was cool between March and July of 1996. Therefore I try not to put that kind of burden on others.

Do one thing and do it well. A bad example of this is Hibernate/JPA. It does (or tries to) provide all of the following:

- mapping between a relational and an object-oriented model
- caching
- change detection
- a query DSL

It is kind of ok for a framework or library to provide multiple services, if you can decide for each service separately whether you want to use it or not. But if it controls too many aspects of your project, the chance that it doesn't do anything well gets huge. And you won't be able to exchange it easily, because now you have to replace half a dozen libraries at once.

Method calls are cool. Annotations are ok. Byte code manipulation is scary. Code generation is a reason to run for the hills. In that list, only method calls can be abstracted over properly. All the other stuff tends to get in your way. Annotations are kind of harmless, but it is easy to get into situations where you have more annotations than actual code. Byte code manipulation starts to put some serious constraints on what you can do in your code. And code generation additionally slows down your build process. (A small sketch of what plain method calls buy you follows at the end of this post.)

Keep your fingers off my domain model. The domain model is really the important part of an application. I can change the persistence or the UI of an application, but if I have to rework the domain model, everything changes and I'm essentially rewriting the application. Also, I need all the flexibility the programming language of choice offers to design the domain model. I don't want to get restricted by some stupid framework that requires default constructors or getters and setters for all fields.

Can we handle it? There are many things that sound really awesome, but they require such a different style of coding that many developers will have a hard time tackling them. And just because I think I can handle it doesn't necessarily mean I actually can. So better to stay simple and old-fashioned.

Reference: What I look for in frameworks from our JCG partner Jens Schauder at the Schauderhaft blog.
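To illustrate the point about plain method calls being the easiest thing to abstract over, here is a small, invented Java sketch: configuration expressed as ordinary method calls can be wrapped, composed and reused like any other code, whereas an annotation is fixed to the element it decorates. The MappingConfig class and its methods are made up for this example and do not belong to any particular framework.

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical, framework-agnostic mapping configuration expressed as plain method calls.
class MappingConfig {

    private String table;
    private final Map<String, String> columns = new LinkedHashMap<>();

    MappingConfig table(String name) {
        this.table = name;
        return this;
    }

    MappingConfig column(String field, String column) {
        columns.put(field, column);
        return this;
    }
}

class PersistenceSetup {

    // Because this is just code, a common convention can be factored into a
    // reusable helper method, something an annotation on a class cannot do.
    static MappingConfig withAuditColumns(MappingConfig config) {
        return config
                .column("createdAt", "CREATED_AT")
                .column("updatedAt", "UPDATED_AT");
    }

    static MappingConfig orderMapping() {
        return withAuditColumns(
                new MappingConfig()
                        .table("ORDERS")
                        .column("id", "ORDER_ID"));
    }
}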

Spark: Write to CSV file with header using saveAsFile

In my last blog post I showed how to write to a single CSV file using Spark and Hadoop and the next thing I wanted to do was add a header row to the resulting row. Hadoop’s FileUtil#copyMerge function does take a String parameter but it adds this text to the end of each partition file which isn’t quite what we want. However, if we copy that function into our own FileUtil class we can restructure it to do what we want:       import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.*; import org.apache.hadoop.io.IOUtils; import java.io.IOException;   public class MyFileUtil { public static boolean copyMergeWithHeader(FileSystem srcFS, Path srcDir, FileSystem dstFS, Path dstFile, boolean deleteSource, Configuration conf, String header) throws IOException { dstFile = checkDest(srcDir.getName(), dstFS, dstFile, false); if(!srcFS.getFileStatus(srcDir).isDir()) { return false; } else { FSDataOutputStream out = dstFS.create(dstFile); if(header != null) { out.write((header + "\n").getBytes("UTF-8")); }   try { FileStatus[] contents = srcFS.listStatus(srcDir);   for(int i = 0; i < contents.length; ++i) { if(!contents[i].isDir()) { FSDataInputStream in = srcFS.open(contents[i].getPath());   try { IOUtils.copyBytes(in, out, conf, false);   } finally { in.close(); } } } } finally { out.close(); }   return deleteSource?srcFS.delete(srcDir, true):true; } }   private static Path checkDest(String srcName, FileSystem dstFS, Path dst, boolean overwrite) throws IOException { if(dstFS.exists(dst)) { FileStatus sdst = dstFS.getFileStatus(dst); if(sdst.isDir()) { if(null == srcName) { throw new IOException("Target " + dst + " is a directory"); }   return checkDest((String)null, dstFS, new Path(dst, srcName), overwrite); }   if(!overwrite) { throw new IOException("Target " + dst + " already exists"); } } return dst; } } We can then update our merge function to call this instead: def merge(srcPath: String, dstPath: String, header:String): Unit = { val hadoopConfig = new Configuration() val hdfs = FileSystem.get(hadoopConfig) MyFileUtil.copyMergeWithHeader(hdfs, new Path(srcPath), hdfs, new Path(dstPath), false, hadoopConfig, header) } We call merge from our code like this: merge(file, destinationFile, "type,count") I wasn’t sure how to import my Java based class into the Spark shell so I compiled the code into a JAR and submitted it as a job instead: $ sbt package [info] Loading global plugins from /Users/markneedham/.sbt/0.13/plugins [info] Loading project definition from /Users/markneedham/projects/spark-play/playground/project [info] Set current project to playground (in build file:/Users/markneedham/projects/spark-play/playground/) [info] Compiling 3 Scala sources to /Users/markneedham/projects/spark-play/playground/target/scala-2.10/classes... [info] Packaging /Users/markneedham/projects/spark-play/playground/target/scala-2.10/playground_2.10-1.0.jar ... [info] Done packaging. [success] Total time: 8 s, completed 30-Nov-2014 08:12:26   $ time ./bin/spark-submit --class "WriteToCsvWithHeader" --master local[4] /path/to/playground/target/scala-2.10/playground_2.10-1.0.jar Spark assembly has been built with Hive, including Datanucleus jars on classpath Using Spark's default log4j profile: org/apache/spark/log4j-defaults.propertie ... 
14/11/30 08:16:15 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool 14/11/30 08:16:15 INFO SparkContext: Job finished: saveAsTextFile at WriteToCsvWithHeader.scala:49, took 0.589036 s   real 0m13.061s user 0m38.977s sys 0m3.393s And if we look at our destination file: $ cat /tmp/singlePrimaryTypes.csv type,count THEFT,859197 BATTERY,757530 NARCOTICS,489528 CRIMINAL DAMAGE,488209 BURGLARY,257310 OTHER OFFENSE,253964 ASSAULT,247386 MOTOR VEHICLE THEFT,197404 ROBBERY,157706 DECEPTIVE PRACTICE,137538 CRIMINAL TRESPASS,124974 PROSTITUTION,47245 WEAPONS VIOLATION,40361 PUBLIC PEACE VIOLATION,31585 OFFENSE INVOLVING CHILDREN,26524 CRIM SEXUAL ASSAULT,14788 SEX OFFENSE,14283 GAMBLING,10632 LIQUOR LAW VIOLATION,8847 ARSON,6443 INTERFERE WITH PUBLIC OFFICER,5178 HOMICIDE,4846 KIDNAPPING,3585 INTERFERENCE WITH PUBLIC OFFICER,3147 INTIMIDATION,2471 STALKING,1985 OFFENSES INVOLVING CHILDREN,355 OBSCENITY,219 PUBLIC INDECENCY,86 OTHER NARCOTIC VIOLATION,80 RITUALISM,12 NON-CRIMINAL,12 OTHER OFFENSE ,6 NON - CRIMINAL,2 NON-CRIMINAL (SUBJECT SPECIFIED),2 Happy days!The code is available as a gist if you want to see all the details.Reference: Spark: Write to CSV file with header using saveAsFile from our JCG partner Mark Needham at the Mark Needham Blog blog....

Developing with WSO2

For a few months now I have been back working with WSO2 products. In the upcoming posts I describe some of the (small) issues I ran into and how to solve them. The first thing I did when setting up my development environment was to download the Developer Studio (64-bit version) on my Mac. After unzipping the downloaded zip file you see the familiar Eclipse icon. However, after double-clicking the icon, Eclipse doesn't start… Great way to start with the product! Although this issue is described on the installation page, this isn't exactly the information I read before starting a Java IDE like Eclipse. It turns out that you have to adjust the file permissions for the Studio to work on a Mac: just open up a Terminal, browse to the installation directory and modify the permissions there. After fixing the permissions you will see the familiar Eclipse application after double-clicking the icon.

Another small issue that you just have to know how to work with concerns the Management Console that comes with the WSO2 products, for example the Management Console of the WSO2 ESB. When you want to edit the source of a certain item in the ESB, you can do this by going to the source view of that item. If you then want to increase the size of the source box, you shouldn't try to drag the right corner but just click it once and resize the box; that way you can see more of the source view. Once you know it, it is so easy!

Reference: Developing with WSO2 from our JCG partner Pascal Alma at The Pragmatic Integrator blog.

The New Agile–The beginning

Frequent readers of this blog know I'm a big proponent of agile, with a small "a". I tend to scoff at new big ideas that seem to jump out of a marketing committee. Truth is, sometimes these are good ideas. Sometimes they are first versions of trial and error. When we look at them at the current stage they may seem ridiculous, but if someone gets exposed to them and continues the push, we may get something wonderful. Much like quality, we get the real judgment after the fact, sometimes years later. Therefore, I've decided to put some of my cynicism aside and try to look, critically, at things in the agile world that have happened since the agile manifesto. The collection of posts called "The New Agile" stood behind the making of my "The New Agile" presentation. I presented different topics in the agile world. Some of the information will be accompanied by my observations, and some will be just an encouragement to you to study more if they interest you. There will be recommendations, but especially around what interests me. It may not be the same for everyone.

Before we start though, we need to look back at how we got here. And like many examples, I begin by going back to the agile manifesto: "We are uncovering better ways of developing software, by doing it and helping others do it". 17 smart people who went skiing in Snowbird, Utah in 2001 applied science to software development in one sentence. The idea is that we don't know everything, but we continue to try, learn, fail and try again. The idea is that if more people do it, we'll increase our learning. And the idea that software development is not a single process, but different paths, some we haven't even discovered yet. Great ideas. Those are still true today, and therefore what we've already learned may not be the best way to develop software. It may seem, for example, that after almost 20 years of TDD, we couldn't find anything better, and therefore it is a best practice. But we may find something better tomorrow.

Those things didn't start in 2001. That was just when they got a name. If we go back in history, we can see different people learning and applying lean principles before they got their name. Take W. Edwards Deming, for example. After WWII, he went to Japan and formalized the basis of agile: the PDCA cycle of plan, do, check and adjust. It's the basis of continuous learning and improvement. Deming put in place the lean foundation that Toyota would base their manufacturing processes on. It may come as a shock to you, after years of saying "developing software is not like building bridges", that it really is, if you're building the bridges correctly. The idea of a line worker who can stop the line is the origin of continuous integration. Don't believe me? In a working CI, what happens when the build breaks? Fixing it is the biggest priority. And it's not a managerial decree. It's a culture where the developers stop what they are doing and get the build on track again, and their decision is supported by management. Just like when a line worker stops the production line. Quality comes first and the culture supports it.

So we had those ideas floating around, but because we didn't have an actual internet until the 90s, ideas were not exchanged at the rate they are today. Communication was still limited. So when people like Ken Schwaber and Jeff Sutherland started working on Scrum, Kent Beck and Ron Jeffries on eXtreme Programming, and Alistair Cockburn on Crystal, they were "just" working with teams.
The different names came to support the consultants' marketing efforts, but at its base it was "just" working with teams. The way they exchanged ideas was the old way: papers, books and conferences. You should remember that there weren't many conferences about those things then. There were mostly programmer conferences (nobody dared think about conferences for testers then). There were business management conferences, but they weren't really interested in what the developers were playing with. And then the internet happened. Much like with literacy and then print, the internet gave knowledge a way to explode and reach an exponentially larger audience. Finally, ideas "scaled". They were compared to each other, discussed, confronted, and applied. Some successfully, some poorly, some deviously. A few years after the agile manifesto, the business world was ready to stop and hear about some development methodology with a name that came from rugby. In most cases, we'd say the rest is history. But this was only the beginning.

Reference: The New Agile–The beginning from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

ReactiveMongo with Akka, Scala and websockets

I was looking for a simple websocket server for one of my projects to test some stuff with reactive mongo. When looking around, though, I couldn’t really find a simple basic implementation without including a complete framework. Finally I stumbled upon one of Typesage activtor projects: http://typesafe.com/activator/template/akka-spray-websocket. Even though the name implies that spray is required, it actually uses websocket stuff from here: https://github.com/TooTallNate/Java-WebSocket, which provides a very simple to use basic websocket implementation. So in this article I’ll show you how you can setup a very simple websocket server (without requiring additional frameworks), together with Akka and ReactiveMongo. The following screenshots shows what we’re aiming for:In this screenshot you can see a simple websocket client that talks to our server. Our server has the following functionality:Anything the client sends is echo’d back. Any input added to a specific (capped) collection in mongoDB is automatically pushed towards all the listeners.You can cut and paste all the code from this article, but it is probably easier to just get the code from git. You can find it in github here: https://github.com/josdirksen/smartjava/tree/master/ws-akka Getting started The first thing we need to do is setup our workspace, so lets start by looking at the sbt configuration: organization := "org.smartjava"   version := "0.1"   scalaVersion := "2.11.2"   scalacOptions := Seq("-unchecked", "-deprecation", "-encoding", "utf8")   libraryDependencies ++= { val akkaV = "2.3.6" Seq( "com.typesafe.akka" %% "akka-actor" % akkaV, "org.java-websocket" % "Java-WebSocket" % "1.3.1-SNAPSHOT", "org.reactivemongo" %% "reactivemongo" % "0.10.5.0.akka23" ) }   resolvers ++= Seq("Code Envy" at "http://codenvycorp.com/repository/" ,"Typesafe" at "http://repo.typesafe.com/typesafe/releases/") Nothing special here, we just specify our dependencies and add some resolvers so that sbt knows where to retrieve the dependencies from. Before we look at the code lets first look at the directory structure and the file of our project: ├── build.sbt └── src    └── main       ├── resources       │   ├── application.conf       │   └── log4j2.xml       └── scala       ├── Boot.scala       ├── DB.scala       ├── WSActor.scala       └── WSServer.scala In the src/main/resources directory we store our configuration files and in src/main/scala we store all our scala files. Let start by looking at the configuration files. For this project we use two: The Application.conf file contains our project’s configuration and looks like this: akka { loglevel = "DEBUG" }   mongo { db = "scala" collection = "rmongo" location = "localhost" }   ws-server { port = 9999 } As you can see we just define the log level, how to use mongo and on which port we want our websocket server to listen. And we also need a log4j2.xml file since the reactivemongo library uses that one for logging: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="INFO"> <Appenders> <Console name="Console" target="SYSTEM_OUT"> <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/> </Console> </Appenders> <Loggers> <Root level="INFO"> <AppenderRef ref="Console"/> </Root> </Loggers> </Configuration> So, with the boring stuff out of the way lets look at the scala files. Starting the websocket server and registering the paths The Boot.scala file looks like this: package org.smartjava   import akka.actor.{Props, ActorSystem}   /** * This class launches the system. 
*/ object Boot extends App { // create the actor system implicit lazy val system = ActorSystem("ws-system") // setup the mongoreactive connection implicit lazy val db = new DB(Configuration.location, Configuration.dbname);   // we'll use a simple actor which echo's everything it finds back to the client. val echo = system.actorOf(EchoActor.props(db, Configuration.collection), "echo")   // define the websocket routing and start a websocket listener private val wsServer = new WSServer(Configuration.port) wsServer.forResource("/echo", Some(echo)) wsServer.start   // make sure the actor system and the websocket server are shutdown when the client is // shutdown sys.addShutdownHook({system.shutdown;wsServer.stop}) }   // load configuration from external file object Configuration { import com.typesafe.config.ConfigFactory   private val config = ConfigFactory.load config.checkValid(ConfigFactory.defaultReference)   val port = config.getInt("ws-server.port") val dbname = config.getString("mongo.db") val collection = config.getString("mongo.collection") val location = config.getString("mongo.location") } In this source file we see two objects. The Configuration object allows us to easily access the configuration elements from the application.conf file and the Boot object will start our server. The comments in the code should pretty much explain what is happening, but let me point out the main things:We create an Akka actor system and a connection to our mongoDB instance. We define an actor which we can register to a specific websocket path. Then we create and start the websocketserver and register a path to the actor we just created. Finally we register a shutdown hook, to clean everything up.And that’s it. Now lets look at the interesting part of the code. Next up is the WSServer.scala file. Setting up a websocket server In the WSServer.scala file we define the websocket server. package org.smartjava   import akka.actor.{ActorSystem, ActorRef} import java.net.InetSocketAddress import org.java_websocket.WebSocket import org.java_websocket.framing.CloseFrame import org.java_websocket.handshake.ClientHandshake import org.java_websocket.server.WebSocketServer import scala.collection.mutable.Map import akka.event.Logging   /** * The WSserver companion objects defines a number of distinct messages sendable by this component */ object WSServer { sealed trait WSMessage case class Message(ws : WebSocket, msg : String) extends WSMessage case class Open(ws : WebSocket, hs : ClientHandshake) extends WSMessage case class Close(ws : WebSocket, code : Int, reason : String, external : Boolean) extends WSMessage case class Error(ws : WebSocket, ex : Exception) extends WSMessage }   /** * Create a websocket server that listens on a specific address. * * @param port */ class WSServer(val port : Int)(implicit system : ActorSystem, db: DB ) extends WebSocketServer(new InetSocketAddress(port)) {   // maps the path to a specific actor. private val reactors = Map[String, ActorRef]() // setup some logging based on the implicit passed in actorsystem private val log = Logging.getLogger(system, this);   // Call this function to bind an actor to a specific path. All incoming // connections to a specific path will be routed to that specific actor. 
final def forResource(descriptor : String, reactor : Option[ActorRef]) { log.debug("Registring actor:" + reactor + " to " + descriptor); reactor match { case Some(actor) => reactors += ((descriptor, actor)) case None => reactors -= descriptor } }   // onMessage is called when a websocket message is recieved. // in this method we check whether we can find a listening // actor and forward the call to that. final override def onMessage(ws : WebSocket, msg : String) {   if (null != ws) { reactors.get(ws.getResourceDescriptor) match { case Some(actor) => actor ! WSServer.Message(ws, msg) case None => ws.close(CloseFrame.REFUSE) } } }   final override def onOpen(ws : WebSocket, hs : ClientHandshake) { log.debug("OnOpen called {} :: {}", ws, hs); if (null != ws) { reactors.get(ws.getResourceDescriptor) match { case Some(actor) => actor ! WSServer.Open(ws, hs) case None => ws.close(CloseFrame.REFUSE) } } }   final override def onClose(ws : WebSocket, code : Int, reason : String, external : Boolean) { log.debug("Close called {} :: {} :: {} :: {}", ws, code, reason, external); if (null != ws) { reactors.get(ws.getResourceDescriptor) match { case Some(actor) => actor ! WSServer.Close(ws, code, reason, external) case None => ws.close(CloseFrame.REFUSE) } } } final override def onError(ws : WebSocket, ex : Exception) { log.debug("onError called {} :: {}", ws, ex); if (null != ws) { reactors.get(ws.getResourceDescriptor) match { case Some(actor) => actor ! WSServer.Error(ws, ex) case None => ws.close(CloseFrame.REFUSE) } } } } A large source file, but not difficult to understand. Let me explain the core concepts:We first define a number of messages as case classes. These are the messages that we sent to our actors. They reflect the messages our websocket server can receive from a client. The WSServer itself extends from the WebSocketServer provided by the org.java_websocket library. The WSServer defines one additional function called forResource. With this function we define which actor to call when we receive a message on our websocket server. and finally we override the different on* methods which are called when a specific event happens on to our websocket server.Now lets look at the echo functionality The akka echo actor The echo actor has two roles in this scenario. First it provides the functionality to respond to incoming messages by responding with the same message. Besides that it also creates a child actor (named ListenActor) that handles the documents received from mongoDB. object EchoActor {   // Messages send specifically by this actor to another instance of this actor. sealed trait EchoMessage   case class Unregister(ws : WebSocket) extends EchoMessage case class Listen() extends EchoMessage; case class StopListening() extends EchoMessage   def props(db: DB): Props = Props(new EchoActor(db)) }   /** * Actor that handles the websocket request */ class EchoActor(db: DB) extends Actor with ActorLogging { import EchoActor._   val clients = mutable.ListBuffer[WebSocket]() val socketActorMapping = mutable.Map[WebSocket, ActorRef]()   override def receive = {   // receive the open request case Open(ws, hs) => { log.debug("Received open request. Start listening for ", ws) clients += ws   // create the child actor that handles the db listening val targetActor = context.actorOf(ListenActor.props(ws, db));   socketActorMapping(ws) = targetActor; targetActor ! Listen }   // recieve the close request case Close(ws, code, reason, ext) => { log.debug("Received close request. 
Unregisting actor for url {}", ws.getResourceDescriptor)   // send a message to self to unregister self ! Unregister(ws) socketActorMapping(ws) ! StopListening socketActorMapping remove ws; }   // recieves an error message case Error(ws, ex) => self ! Unregister(ws)   // receives a text message case Message(ws, msg) => { log.debug("url {} received msg '{}'", ws.getResourceDescriptor, msg) ws.send("You send:" + msg); }   // unregister the websocket listener case Unregister(ws) => { if (null != ws) { log.debug("unregister monitor") clients -= ws } } } } The code of this actor pretty much should explain itself. With this actor and the code so far we’ve got a simple websocket server that uses an actor to handle messages. Before we look at the ListenActor, which is started from the “Open” message received by the EchoHandler, lets quickly look at how we connect to mongoDB from our DB object: package org.smartjava;   import play.api.libs.iteratee.{Concurrent, Enumeratee, Iteratee} import reactivemongo.api.collections.default.BSONCollection import reactivemongo.api._ import reactivemongo.bson.BSONDocument import scala.concurrent.ExecutionContext.Implicits.global   /** * Contains DB related functions. */ class DB(location:String, dbname:String) {   // get connection to the database val db: DefaultDB = createConnection(location, dbname) // create a enumerator that we use to broadcast received documents val (bcEnumerator, channel) = Concurrent.broadcast[BSONDocument] // assign the channel to the mongodb cursor enumerator val iteratee = createCursor(getCollection(Configuration.collection)) .enumerate() .apply(Iteratee .foreach({doc: BSONDocument => channel.push(doc)}));   /** * Return a simple collection */ private def getCollection(collection: String): BSONCollection = { db(collection) }   /** * Create the connection */ private def createConnection(location: String, dbname: String) : DefaultDB = { // needed to connect to mongoDB. import scala.concurrent.ExecutionContext   // gets an instance of the driver // (creates an actor system) val driver = new MongoDriver val connection = driver.connection(List(location))   // Gets a reference to the database connection(dbname) }   /** * Create the cursor */ private def createCursor(collection: BSONCollection): Cursor[BSONDocument] = { import reactivemongo.api._ import reactivemongo.bson._ import scala.concurrent.Future   import scala.concurrent.ExecutionContext.Implicits.global   val query = BSONDocument( "currentDate" -> BSONDocument( "$gte" -> BSONDateTime(System.currentTimeMillis()) ));   // we enumerate over a capped collection val cursor = collection.find(query) .options(QueryOpts().tailable.awaitData) .cursor[BSONDocument]   return cursor }   /** * Simple function that registers a callback and a predicate on the * broadcasting enumerator */ def listenToCollection(f: BSONDocument => Unit, p: BSONDocument => Boolean ) = {   val it = Iteratee.foreach(f) val itTransformed = Enumeratee.takeWhile[BSONDocument](p).transform(it); bcEnumerator.apply(itTransformed); } } Most of this code is fairly standard, but I’d like to point a couple of things out. 
At the beginning of this class we set up an iteratee like this: val db: DefaultDB = createConnection(location, dbname) val (bcEnumerator, channel) = Concurrent.broadcast[BSONDocument] val iteratee = createCursor(getCollection(Configuration.collection)) .enumerate() .apply(Iteratee .foreach({doc: BSONDocument => channel.push(doc)})); What we do here is that we first create a broadcast enumerator using the Concurrent.broadcast function. This enumerator can push elements provided by the channel to multiple consumers (iteratees). Next we create an iteratee on the enumerator provided by our ReactiveMongo cursor, where we use the just created channel to pass the documents to any iteratee that is connected to the bcEnumerator. We connect iteratees to the bcEnumerator in the listenToCollection function: def listenToCollection(f: BSONDocument => Unit, p: BSONDocument => Boolean ) = {   val it = Iteratee.foreach(f) val itTransformed = Enumeratee.takeWhile[BSONDocument](p).transform(it); bcEnumerator.apply(itTransformed); } In this function we pass in a function and a predicate. The function is executed whenever a document is added to mongo and the predicate is used to determine when to stop sending messages to the iteratee. The only missing part is the ListenActor ListenActor which responds to messages from Mongo The following code shows the actor responsible for responding to messages from mongoDB. When it receives a Listen message it registers itself using the listenToCollection function. Whenever a message is passed in from mongo it sends a message to itself, to further propogate it to the websocket. object ListenActor { case class ReceiveUpdate(msg: String); def props(ws: WebSocket, db: DB): Props = Props(new ListenActor(ws, db)) } class ListenActor(ws: WebSocket, db: DB) extends Actor with ActorLogging {   var predicateResult = true;   override def receive = { case Listen => {   log.info("{} , {} , {}", ws, db)   // function to call when we receive a message from the reactive mongo // we pass this to the DB cursor val func = ( doc: BSONDocument) => { self ! ReceiveUpdate(BSONDocument.pretty(doc)); }   // the predicate that determines how long we want to retrieve stuff // we do this while the predicateResult is true. val predicate = (d: BSONDocument) => {predicateResult} :Boolean Some(db.listenToCollection(func, predicate)) }   // when we recieve an update we just send it over the websocket case ReceiveUpdate(msg) => { ws.send(msg); }   case StopListening => { predicateResult = false;   // and kill ourselves self ! PoisonPill } } } Now that we’ve done all that, we can run this example. 
On startup you’ll see something like this: [DEBUG] [11/22/2014 15:14:33.856] [main] [EventStream(akka://ws-system)] logger log1-Logging$DefaultLogger started [DEBUG] [11/22/2014 15:14:33.857] [main] [EventStream(akka://ws-system)] Default Loggers started [DEBUG] [11/22/2014 15:14:35.104] [main] [WSServer(akka://ws-system)] Registring actor:Some(Actor[akka://ws-system/user/echo#1509664759]) to /echo 15:14:35.211 [reactivemongo-akka.actor.default-dispatcher-5] INFO reactivemongo.core.actors.MongoDBSystem - The node set is now available 15:14:35.214 [reactivemongo-akka.actor.default-dispatcher-5] INFO reactivemongo.core.actors.MongoDBSystem - The primary is now available Next when we connect a websocket we see the following:[DEBUG] [11/22/2014 15:15:18.957] [WebSocketWorker-32] [WSServer(akka://ws-system)] OnOpen called org.java_websocket.WebSocketImpl@3161f479 :: org.java_websocket.handshake.HandshakeImpl1Client@6d9a6e19 [DEBUG] [11/22/2014 15:15:18.965] [ws-system-akka.actor.default-dispatcher-2] [akka://ws-system/user/echo] Received open request. Start listening for WARNING arguments left: 1 [INFO] [11/22/2014 15:15:18.973] [ws-system-akka.actor.default-dispatcher-5] [akka://ws-system/user/echo/$a] org.java_websocket.WebSocketImpl@3161f479 , org.smartjava.DB@73fd64 Now lets insert a message into the mongo collection which we created with the following command: db.createCollection( "rmongo", { capped: true, size: 100000 } ) And lets insert an message: > db.rmongo.insert({"test": 1234567, "currentDate": new Date()}) WriteResult({ "nInserted" : 1 }) Which results in this in our websocket client:If you’re interested in the source files look at the following directory in GitHub: https://github.com/josdirksen/smartjava/tree/master/ws-akkaReference: ReactiveMongo with Akka, Scala and websockets from our JCG partner Jos Dirksen at the Smart Java blog....

Continuous Deployment: Introduction

This article is part of the Continuous Integration, Delivery and Deployment series.

Continuous deployment is the ultimate culmination of software craftsmanship. Our skills need to be at such a high level that we have the confidence to continuously and automatically deploy our software to production. It is the natural evolution of continuous integration and delivery. We usually start with continuous integration, with software being built and tests executed on every commit to VCS. As we get better with the process we proceed towards continuous delivery, with the process and, especially, the tests so well done that we have the confidence that any version of the software that passed all validation can be deployed to production. We can release the software any time we want with the click of a button. Continuous deployment is accomplished when we get rid of that button and deploy every "green" build to production.

This article will try to explore the goals of the final stage of continuous deployment, the deployment itself. It assumes that static analysis is being performed, unit, functional and stress tests are being run, test code coverage is high enough, and that we have the confidence that our software is performing as expected after every commit. The goals of the deployment process are:

- Run often
- Be automatic
- Be fast
- Provide zero downtime
- Provide the ability to roll back

Deploy often

The software industry is constantly under pressure to provide better quality and deliver faster, resulting in shorter time to market. The more often we deliver without sacrificing quality, the sooner we reap the benefits of the features we delivered. The time spent between having some feature developed and deployed to production is time wasted. Ideally, we should have new features delivered as soon as they are pushed to the repository. In the old days, we were used to the waterfall model of delivering software. We'd plan everything, develop everything, test everything and, finally, deploy the project to production. It was not uncommon for that process to take months or even years. Putting months of work into production often resulted in deployment hell, with many people working through the weekend to put an endless amount of new code onto production servers. More often than not, things did not work as initially planned. Since then we have learned that there are great benefits in working on features instead of projects and delivering them as soon as they are done. Deliver often means deliver as soon as a feature is developed.

Automate everything

Having tasks automated allows us to remove part of the "human error" as well as to do things faster. With BDD as a great way to define requirements and validate them as soon as features are pushed to the repository, and TDD as a way to drive development and continuously validate the code at the unit level, we can gain the confidence that the code is performing as expected. The same automation should apply to deployment, from building artifacts through provisioning environments to the actual deployment. Building artifacts is a process that has been working well for quite some time. Whether it's Gradle or Maven for Java (more info can be found in the Java Build Tools article), SBT for Scala, Gulp or Grunt for JavaScript, or whichever other build tool or programming language you're using, hardly anyone these days has a problem building their software. Provisioning environments and deploying build artifacts is a process that still leaves a lot to be desired. Provisioning tools like Chef and Puppet did not fully deliver on their promise.
They are either unreliable or too complex to use. Docker provided a fresh new approach (even though containers are not anything new). With it we can easily deploy containers running our software. Even though some provisioning of the servers themselves might still be needed, the usage of tools like Chef and Puppet is (or will be) greatly reduced, if they are needed at all. With Docker, deploying fully functional software with everything it needs is as easy as executing a single command.

Be fast

The key to continuous deployment is speed. If the process from checking out the code through tests until deployment takes a lot of time, the feedback we're hoping to get is slow. The idea is to have feedback on commits as fast as possible so that, if there are problems, they can be fixed before we move into development of another feature. Context switching is expensive and going back and forth is a big waste of time. The need for speed does not apply only to test run time but also to deployment. The sooner we deploy, the sooner we'll be able to start integration, regression and stress tests. If those are not run on production servers, there will be another deployment after they are successful. At first glance the difference between 5 and 10 minutes seems small, but things add up. Check out the code, run static analysis, execute unit, functional, integration and stress tests, and finally deploy. All of those together can add up to hours while, in my opinion, a reasonable time for the whole process should not be more than 30 minutes. Run fast, fail fast, deploy quickly and reap the benefits of having new features online.

Zero downtime

Deploying continuously, or at least often, means that a zero-downtime policy is a must. If we deployed once a month or only several times a year, having the application unavailable for a short period of time might not be such a bad thing. However, if we're deploying once a day or even many times a day, downtime, however short it might be, is unacceptable. There are different ways to reach zero downtime, with blue-green deployment being my favorite.

Ability to roll back

No matter how many automated tests we put in place, there is always the possibility that something will go wrong. The option to roll back to the previous version might save us a lot of trouble if something unexpected happens. Actually, we can build our software in such a way that we can roll back to any previous version that passed all our tests. Rollback needs to be as easy as the push of a button, and it needs to be very fast in order to restore the expected functioning of the application with minimal negative effect. Downtime or incorrect behavior costs money and trust. The less time it lasts, the less money we lose. A major obstacle to accomplishing rollback is often the database. NoSQL tends to handle rollback with more grace. However, the reality is that relational DBs are not going to disappear any time soon, and we need to get used to writing delta scripts in a way that keeps all changes backward compatible.

Summary

Continuous deployment sounds to many like it's too risky, or even impossible. Whether it's risky depends on the architecture of the software we're building. As a general rule, splitting the application into smaller independent elements helps a lot. Microservices are the way to go if possible. Risks aside, in many cases there is no business reason or willingness to adopt continuous deployment. Still, software can be continuously deployed to test servers, thus becoming continuous delivery.
No matter whether we are doing continuous deployment, delivery, integration or none of those, having automatic and fast deployment with zero downtime and the ability to roll back provides great benefits. If for no other reason, then because it frees us to do more productive and beneficial tasks. We should design and code our software and let machines do the rest for us. The next article will continue where we stopped and explore different strategies for deploying software.

Reference: Continuous Deployment: Introduction from our JCG partner Viktor Farcic at the Technology conversations blog.

Docker Common Commands Cheatsheet

Docker CLI provides a comprehensive set of commands. Here is a quick cheat sheet of the commonly used commands:

- Build an image: docker build --rm=true .
- Install an image: docker pull ${IMAGE}
- List installed images: docker images
- List installed images (detailed listing): docker images --no-trunc
- Remove an image: docker rmi ${IMAGE_ID}
- Remove all untagged images: docker rmi $(docker images | grep "^<none>" | awk '{print $3}')
- Remove all images: docker rmi $(docker images -q)
- Run a container: docker run
- List containers: docker ps
- Stop a container: docker stop ${CID}
- Find the IP address of a container: docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${CID}
- Attach to a container: docker attach ${CID}
- Remove a container: docker rm ${CID}
- Remove all containers: docker rm $(docker ps -aq)

What other commands do you use commonly?

Reference: Docker Common Commands Cheatsheet from our JCG partner Arun Gupta at the Miles to go 2.0 blog.

Are recruiters generally all so bad?

I have a friend who's a recruiter. Yes, that can actually happen. I mentioned to him that I'd been contacted by a really annoying recruiter, and he responded with "Can I ask you a serious question? Are 'they' generally all so bad? I'm struggling a little to understand why there is so much bad feeling generally, towards them." I had a think about this, and here's the reply I sent him. It's far from exhaustive, but I think it serves to start a discussion.

Lies

1. Every email or phone call contains the phrase "this company is the leader in its field". Generally… no, it's not.
2. "This is a fantastic opportunity for you". This may occasionally be true, but when you see it in every email, it's hard to shake the suspicion that it's a template.
3. "We pay a referral fee". I once suggested someone to a recruiter, and the person got the job. Six months later, I had repeatedly contacted the recruiter without getting any response, so I asked the company HR person to contact them. They finally got back in touch and said the person was actually suggested by someone else, who they named… and I knew that person. I got in touch with him and told him to collect the bounty, but they refused to pay it.
4. If my LinkedIn profile says I'm not interested in a new position, don't contact me about one and then say "I got your details from a friend of yours".

Laziness

1. Don't send me an email that's completely different to my profile. If you're contacting me, you either have my CV or you've seen my LinkedIn profile.
2. Don't end every email with "Please pass this on to all your friends". I don't ask you to write code, so don't ask me to do your job for free (also see Lies #3).
3. Don't ask for my CV in Word format; it's obvious you're going to cut and paste it into some shitty template so the recruiter's client won't see my direct contact details. I put effort into designing my CV, and I don't want you to destroy it.
4. If you want to connect with me on LinkedIn, don't use the default message. I am not click-bait.

Personality

1. As a recruiter, I assume you're a shark-like parasite. If you want to work with me (by which I mean, cream a shitload of money off me), put some effort into building a relationship.

Capabilities

1. If you don't know anything about technology, don't pretend you do. "My client is looking for someone with experience X, which I see you have" is fine; "…yeah, it's really cutting edge stuff like [some dead technology]…" is bullshit.
2. Don't make promises you can't keep.

Market

1. It's a seller's market. As a developer, I'm selling something. As a recruiter, you're forcing yourself into the market as a broker. Brokers are plentiful; good developers are not. Convince me why I should use you to broker my skills; preferably, don't use bullshit during this process.

Finally, two thoughts:

1. If I say I'm not interested in a role, it means I'm not interested in a role. Wheedling, begging and threatening (it happened to a friend of mine!) will not change anything, except that I've now blacklisted your name and your company.
2. Developers talk. If you're an asshole, that piece of news will spread.

Reference: Are recruiters generally all so bad? from our JCG partner Steve Chaloner at the Objectify blog.