

Developing with WSO2

A few months ago I went back to working with WSO2 products. In the upcoming posts I will describe some of the (small) issues I ran into and how to solve them. The first thing I did when setting up my development environment was downloading the Developer Studio (64-bit version) on my Mac. After unzipping the downloaded zip file you see the Eclipse icon. However, after double-clicking the icon, Eclipse doesn’t start. Great way to start with the product! Although this issue is described on their installation page, this isn’t exactly the information I read before starting a Java IDE like Eclipse. It turns out that you have to adjust the file permissions for the Studio to work on a Mac. Just open up a Terminal, browse to the installation directory and modify the permissions. After fixing the permissions you will see the familiar Eclipse application after double-clicking the icon. Another small issue you just have to know how to work with is the Management Console that comes with the WSO2 products, for example the Management Console of the WSO2 ESB. When you want to edit the source of a certain item in the ESB, you can do this by going to the source view of that item. If you then want to increase the size of the source box, you shouldn’t try to drag the right corner; just click it once and the box resizes, so you can see more of the source view. Once you know it, it is so easy! Reference: Developing with WSO2 from our JCG partner Pascal Alma at The Pragmatic Integrator blog.
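The permission fix itself is a one-liner in the Terminal. Here is a sketch of the idea; the directory and launcher name below (`/tmp/devstudio-demo`, `eclipse`) are placeholders standing in for your actual Developer Studio installation directory, so adjust them to match your download:

```shell
# Simulate the situation: a freshly unzipped launcher without the executable bit.
mkdir -p /tmp/devstudio-demo
cd /tmp/devstudio-demo
touch eclipse
chmod 644 eclipse

# The actual fix: in the real installation directory, add the executable bit
# to the Eclipse launcher so macOS will start it.
chmod +x eclipse

# Verify the bit is set before double-clicking the icon again.
test -x eclipse && echo "launcher is now executable"
```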

The New Agile–The beginning

Frequent readers of this blog know I’m a big proponent of agile, with a small “a”. I tend to scoff at new big ideas that seem to jump out of a marketing committee. Truth is, sometimes these are good ideas. Sometimes they are first versions of trials and errors. When we look at them at the current stage they may seem ridiculous, but if someone gets exposed to them and continues the push, we may get something wonderful. Much like quality, we get the real judgment after the fact, sometimes years later. Therefore, I’ve decided to put some of my cynicism aside and take a critical look at things in the agile world that have happened since the agile manifesto. The collection of posts called “The New Agile” is the material behind my “The New Agile” presentation, in which I presented different topics in the agile world. Some of the information will be accompanied by my observations, and some will be just an encouragement to you to study more if they interest you. There will be recommendations, but mostly around what interests me. It may not be the same for everyone. Before we start, though, we need to look back at how we got here. And like many others, I begin by going back to the agile manifesto: “We are uncovering better ways of developing software, by doing it and helping others do it.” Seventeen smart people who went skiing in Snowbird, Utah in 2001 applied science to software development in one sentence. The idea is that we don’t know everything, but we continue to try, learn, fail and try again. The idea is that if more people do it, we’ll increase our learning. And the idea that software development is not a single process, but different paths, some we haven’t even discovered yet. Great ideas. Those are still true today, and therefore what we’ve already learned may not be the best way to develop software. It may seem, for example, that after almost 20 years of TDD, we couldn’t find anything better, and therefore it is a best practice.
But we may find something better tomorrow. Those things didn’t start in 2001; that was just when they got a name. If we go back in history, we can see different people learning and applying lean principles before those principles got their name. Take W. Edwards Deming, for example. After WWII, he went to Japan and formalized the basis of agile: the PDCA cycle: plan, do, check and adjust. It’s the basis of continuous learning and improvement. Deming put in place the lean foundation that Toyota would base their manufacturing processes on. It may come as a shock to you, after years of saying “developing software is not like building bridges”, that it really is, if you’re building the bridges correctly. The idea of a line worker who can stop the line is the origin of continuous integration. Don’t believe me? In a working CI, what happens when the build breaks? Fixing it is the biggest priority. And it’s not a managerial decree. It’s a culture where the developers stop what they are doing and get the build back on track, and their decision is supported by management. Just like when a line worker stops the production line. Quality comes first, and the culture supports it. So we had those ideas floating around, but because until the 90s we didn’t have an actual internet, ideas were not exchanged at the rate they are today. Communication was still limited. So when people like Ken Schwaber and Jeff Sutherland started working on Scrum, Kent Beck and Ron Jeffries on eXtreme Programming, and Alistair Cockburn on Crystal, they were “just” working with teams. The different names came to support the consultants’ marketing efforts, but at its base it was “just” working with teams. The way they exchanged ideas was the old way: papers, books and conferences. You should remember that there weren’t many conferences about those things then. There were mostly programmer conferences (nobody dared thinking about conferences for testers then).
There were business management conferences, but they weren’t really interested in what the developers were playing with. And then the internet happened. Much like with literacy and then print, the internet gave knowledge ways to explode and reach an exponentially larger audience. Finally, ideas “scaled”. They were compared to each other, discussed, confronted, and applied. Some successfully, some poorly, some deviously. A few years after the agile manifesto, the business world was ready to stop and hear about a development methodology with a name that came from rugby. In most cases, we’d say the rest is history. But this was only the beginning. Reference: The New Agile–The beginning from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

ReactiveMongo with Akka, Scala and websockets

I was looking for a simple websocket server for one of my projects to test some stuff with ReactiveMongo. When looking around, though, I couldn’t really find a simple basic implementation without pulling in a complete framework. Finally I stumbled upon one of the Typesafe Activator projects: http://typesafe.com/activator/template/akka-spray-websocket. Even though the name implies that Spray is required, it actually uses the websocket support from https://github.com/TooTallNate/Java-WebSocket, which provides a very simple-to-use basic websocket implementation. So in this article I’ll show you how you can set up a very simple websocket server (without requiring additional frameworks), together with Akka and ReactiveMongo. What we’re aiming for is a simple websocket client that talks to our server. Our server has the following functionality:

- Anything the client sends is echoed back.
- Any document added to a specific (capped) collection in mongoDB is automatically pushed towards all the listeners.

You can cut and paste all the code from this article, but it is probably easier to just get the code from git.
You can find it on GitHub here: https://github.com/josdirksen/smartjava/tree/master/ws-akka

Getting started

The first thing we need to do is set up our workspace, so let’s start by looking at the sbt configuration:

```scala
organization := "org.smartjava"

version := "0.1"

scalaVersion := "2.11.2"

scalacOptions := Seq("-unchecked", "-deprecation", "-encoding", "utf8")

libraryDependencies ++= {
  val akkaV = "2.3.6"
  Seq(
    "com.typesafe.akka"  %% "akka-actor"     % akkaV,
    "org.java-websocket" %  "Java-WebSocket" % "1.3.1-SNAPSHOT",
    "org.reactivemongo"  %% "reactivemongo"  % ""
  )
}

resolvers ++= Seq(
  "Code Envy" at "http://codenvycorp.com/repository/",
  "Typesafe"  at "http://repo.typesafe.com/typesafe/releases/"
)
```

Nothing special here: we just specify our dependencies and add some resolvers so that sbt knows where to retrieve the dependencies from. Before we look at the code, let’s first look at the directory structure and the files of our project:

```
├── build.sbt
└── src
    └── main
        ├── resources
        │   ├── application.conf
        │   └── log4j2.xml
        └── scala
            ├── Boot.scala
            ├── DB.scala
            ├── WSActor.scala
            └── WSServer.scala
```

In the src/main/resources directory we store our configuration files, and in src/main/scala we store all our Scala files. Let’s start by looking at the configuration files; for this project we use two. The application.conf file contains our project’s configuration and looks like this:

```
akka {
  loglevel = "DEBUG"
}

mongo {
  db = "scala"
  collection = "rmongo"
  location = "localhost"
}

ws-server {
  port = 9999
}
```

As you can see, we just define the log level, how to reach mongo, and the port on which we want our websocket server to listen.
We also need a log4j2.xml file, since the reactivemongo library uses log4j for its logging:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```

So, with the boring stuff out of the way, let’s look at the Scala files.

Starting the websocket server and registering the paths

The Boot.scala file looks like this:

```scala
package org.smartjava

import akka.actor.{Props, ActorSystem}

/**
 * This class launches the system.
 */
object Boot extends App {
  // create the actor system
  implicit lazy val system = ActorSystem("ws-system")
  // setup the mongoreactive connection
  implicit lazy val db = new DB(Configuration.location, Configuration.dbname);

  // we'll use a simple actor which echoes everything it finds back to the client.
  val echo = system.actorOf(EchoActor.props(db), "echo")

  // define the websocket routing and start a websocket listener
  private val wsServer = new WSServer(Configuration.port)
  wsServer.forResource("/echo", Some(echo))
  wsServer.start

  // make sure the actor system and the websocket server are shutdown when the
  // client is shutdown
  sys.addShutdownHook({system.shutdown; wsServer.stop})
}

// load configuration from external file
object Configuration {
  import com.typesafe.config.ConfigFactory

  private val config = ConfigFactory.load
  config.checkValid(ConfigFactory.defaultReference)

  val port = config.getInt("ws-server.port")
  val dbname = config.getString("mongo.db")
  val collection = config.getString("mongo.collection")
  val location = config.getString("mongo.location")
}
```

In this source file we see two objects. The Configuration object allows us to easily access the configuration elements from the application.conf file, and the Boot object starts our server.
The comments in the code should pretty much explain what is happening, but let me point out the main things:

- We create an Akka actor system and a connection to our mongoDB instance.
- We define an actor which we can register to a specific websocket path.
- Then we create and start the websocket server and register a path to the actor we just created.
- Finally we register a shutdown hook to clean everything up.

And that’s it. Now let’s look at the interesting part of the code: the WSServer.scala file.

Setting up a websocket server

In the WSServer.scala file we define the websocket server:

```scala
package org.smartjava

import akka.actor.{ActorSystem, ActorRef}
import java.net.InetSocketAddress
import org.java_websocket.WebSocket
import org.java_websocket.framing.CloseFrame
import org.java_websocket.handshake.ClientHandshake
import org.java_websocket.server.WebSocketServer
import scala.collection.mutable.Map
import akka.event.Logging

/**
 * The WSServer companion object defines a number of distinct messages
 * sendable by this component.
 */
object WSServer {
  sealed trait WSMessage
  case class Message(ws: WebSocket, msg: String) extends WSMessage
  case class Open(ws: WebSocket, hs: ClientHandshake) extends WSMessage
  case class Close(ws: WebSocket, code: Int, reason: String, external: Boolean) extends WSMessage
  case class Error(ws: WebSocket, ex: Exception) extends WSMessage
}

/**
 * Create a websocket server that listens on a specific address.
 *
 * @param port
 */
class WSServer(val port: Int)(implicit system: ActorSystem, db: DB)
    extends WebSocketServer(new InetSocketAddress(port)) {

  // maps a path to a specific actor.
  private val reactors = Map[String, ActorRef]()
  // setup some logging based on the implicitly passed in actor system
  private val log = Logging.getLogger(system, this);

  // Call this function to bind an actor to a specific path. All incoming
  // connections to a specific path will be routed to that specific actor.
  final def forResource(descriptor: String, reactor: Option[ActorRef]) {
    log.debug("Registring actor:" + reactor + " to " + descriptor);
    reactor match {
      case Some(actor) => reactors += ((descriptor, actor))
      case None => reactors -= descriptor
    }
  }

  // onMessage is called when a websocket message is received.
  // in this method we check whether we can find a listening
  // actor and forward the call to that.
  final override def onMessage(ws: WebSocket, msg: String) {
    if (null != ws) {
      reactors.get(ws.getResourceDescriptor) match {
        case Some(actor) => actor ! WSServer.Message(ws, msg)
        case None => ws.close(CloseFrame.REFUSE)
      }
    }
  }

  final override def onOpen(ws: WebSocket, hs: ClientHandshake) {
    log.debug("OnOpen called {} :: {}", ws, hs);
    if (null != ws) {
      reactors.get(ws.getResourceDescriptor) match {
        case Some(actor) => actor ! WSServer.Open(ws, hs)
        case None => ws.close(CloseFrame.REFUSE)
      }
    }
  }

  final override def onClose(ws: WebSocket, code: Int, reason: String, external: Boolean) {
    log.debug("Close called {} :: {} :: {} :: {}", ws, code, reason, external);
    if (null != ws) {
      reactors.get(ws.getResourceDescriptor) match {
        case Some(actor) => actor ! WSServer.Close(ws, code, reason, external)
        case None => ws.close(CloseFrame.REFUSE)
      }
    }
  }

  final override def onError(ws: WebSocket, ex: Exception) {
    log.debug("onError called {} :: {}", ws, ex);
    if (null != ws) {
      reactors.get(ws.getResourceDescriptor) match {
        case Some(actor) => actor ! WSServer.Error(ws, ex)
        case None => ws.close(CloseFrame.REFUSE)
      }
    }
  }
}
```

A large source file, but not difficult to understand. Let me explain the core concepts:

- We first define a number of messages as case classes. These are the messages that we send to our actors; they reflect the messages our websocket server can receive from a client.
- The WSServer itself extends the WebSocketServer provided by the org.java_websocket library.
- The WSServer defines one additional function called forResource.
With this function we define which actor to call when we receive a message on our websocket server. Finally, we override the different on* methods, which are called when a specific event happens on our websocket server.

Now let’s look at the echo functionality.

The Akka echo actor

The echo actor has two roles in this scenario. First, it provides the functionality to respond to incoming messages by responding with the same message. Besides that, it also creates a child actor (named ListenActor) that handles the documents received from mongoDB.

```scala
object EchoActor {

  // Messages sent specifically by this actor to another instance of this actor.
  sealed trait EchoMessage

  case class Unregister(ws: WebSocket) extends EchoMessage
  case class Listen() extends EchoMessage;
  case class StopListening() extends EchoMessage

  def props(db: DB): Props = Props(new EchoActor(db))
}

/**
 * Actor that handles the websocket request
 */
class EchoActor(db: DB) extends Actor with ActorLogging {
  import EchoActor._

  val clients = mutable.ListBuffer[WebSocket]()
  val socketActorMapping = mutable.Map[WebSocket, ActorRef]()

  override def receive = {

    // receive the open request
    case Open(ws, hs) => {
      log.debug("Received open request. Start listening for ", ws)
      clients += ws

      // create the child actor that handles the db listening
      val targetActor = context.actorOf(ListenActor.props(ws, db));

      socketActorMapping(ws) = targetActor;
      targetActor ! Listen
    }

    // receive the close request
    case Close(ws, code, reason, ext) => {
      log.debug("Received close request. Unregistering actor for url {}", ws.getResourceDescriptor)

      // send a message to self to unregister
      self ! Unregister(ws)
      socketActorMapping(ws) ! StopListening
      socketActorMapping remove ws;
    }

    // receives an error message
    case Error(ws, ex) => self ! Unregister(ws)

    // receives a text message
    case Message(ws, msg) => {
      log.debug("url {} received msg '{}'", ws.getResourceDescriptor, msg)
      ws.send("You send:" + msg);
    }

    // unregister the websocket listener
    case Unregister(ws) => {
      if (null != ws) {
        log.debug("unregister monitor")
        clients -= ws
      }
    }
  }
}
```

The code of this actor should pretty much explain itself. With this actor and the code so far, we’ve got a simple websocket server that uses an actor to handle messages. Before we look at the ListenActor, which is started from the “Open” message received by the EchoActor, let’s quickly look at how we connect to mongoDB from our DB object:

```scala
package org.smartjava;

import play.api.libs.iteratee.{Concurrent, Enumeratee, Iteratee}
import reactivemongo.api.collections.default.BSONCollection
import reactivemongo.api._
import reactivemongo.bson.BSONDocument
import scala.concurrent.ExecutionContext.Implicits.global

/**
 * Contains DB related functions.
 */
class DB(location: String, dbname: String) {

  // get connection to the database
  val db: DefaultDB = createConnection(location, dbname)
  // create an enumerator that we use to broadcast received documents
  val (bcEnumerator, channel) = Concurrent.broadcast[BSONDocument]
  // assign the channel to the mongodb cursor enumerator
  val iteratee = createCursor(getCollection(Configuration.collection))
    .enumerate()
    .apply(Iteratee.foreach({doc: BSONDocument => channel.push(doc)}));

  /**
   * Return a simple collection
   */
  private def getCollection(collection: String): BSONCollection = {
    db(collection)
  }

  /**
   * Create the connection
   */
  private def createConnection(location: String, dbname: String): DefaultDB = {
    // needed to connect to mongoDB.
    import scala.concurrent.ExecutionContext

    // gets an instance of the driver
    // (creates an actor system)
    val driver = new MongoDriver
    val connection = driver.connection(List(location))

    // Gets a reference to the database
    connection(dbname)
  }

  /**
   * Create the cursor
   */
  private def createCursor(collection: BSONCollection): Cursor[BSONDocument] = {
    import reactivemongo.api._
    import reactivemongo.bson._
    import scala.concurrent.Future

    import scala.concurrent.ExecutionContext.Implicits.global

    val query = BSONDocument(
      "currentDate" -> BSONDocument(
        "$gte" -> BSONDateTime(System.currentTimeMillis())
      ));

    // we enumerate over a capped collection
    val cursor = collection.find(query)
      .options(QueryOpts().tailable.awaitData)
      .cursor[BSONDocument]

    return cursor
  }

  /**
   * Simple function that registers a callback and a predicate on the
   * broadcasting enumerator
   */
  def listenToCollection(f: BSONDocument => Unit, p: BSONDocument => Boolean) = {

    val it = Iteratee.foreach(f)
    val itTransformed = Enumeratee.takeWhile[BSONDocument](p).transform(it);
    bcEnumerator.apply(itTransformed);
  }
}
```

Most of this code is fairly standard, but I’d like to point a couple of things out. At the beginning of this class we set up an iteratee like this:

```scala
val db: DefaultDB = createConnection(location, dbname)
val (bcEnumerator, channel) = Concurrent.broadcast[BSONDocument]
val iteratee = createCursor(getCollection(Configuration.collection))
  .enumerate()
  .apply(Iteratee.foreach({doc: BSONDocument => channel.push(doc)}));
```

What we do here is first create a broadcast enumerator using the Concurrent.broadcast function. This enumerator can push elements provided by the channel to multiple consumers (iteratees). Next we create an iteratee on the enumerator provided by our ReactiveMongo cursor, where we use the just-created channel to pass the documents to any iteratee that is connected to the bcEnumerator.
We connect iteratees to the bcEnumerator in the listenToCollection function:

```scala
def listenToCollection(f: BSONDocument => Unit, p: BSONDocument => Boolean) = {

  val it = Iteratee.foreach(f)
  val itTransformed = Enumeratee.takeWhile[BSONDocument](p).transform(it);
  bcEnumerator.apply(itTransformed);
}
```

In this function we pass in a function and a predicate. The function is executed whenever a document is added to mongo, and the predicate is used to determine when to stop sending messages to the iteratee. The only missing part is the ListenActor.

ListenActor, which responds to messages from Mongo

The following code shows the actor responsible for responding to messages from mongoDB. When it receives a Listen message, it registers itself using the listenToCollection function. Whenever a message is passed in from mongo, it sends a message to itself to further propagate it to the websocket.

```scala
object ListenActor {
  case class ReceiveUpdate(msg: String);
  def props(ws: WebSocket, db: DB): Props = Props(new ListenActor(ws, db))
}

class ListenActor(ws: WebSocket, db: DB) extends Actor with ActorLogging {

  var predicateResult = true;

  override def receive = {
    case Listen => {

      log.info("{} , {} , {}", ws, db)

      // function to call when we receive a message from reactive mongo;
      // we pass this to the DB cursor
      val func = (doc: BSONDocument) => {
        self ! ReceiveUpdate(BSONDocument.pretty(doc));
      }

      // the predicate that determines how long we want to retrieve stuff:
      // we do this while predicateResult is true.
      val predicate = (d: BSONDocument) => {predicateResult}: Boolean
      Some(db.listenToCollection(func, predicate))
    }

    // when we receive an update we just send it over the websocket
    case ReceiveUpdate(msg) => {
      ws.send(msg);
    }

    case StopListening => {
      predicateResult = false;

      // and kill ourselves
      self ! PoisonPill
    }
  }
}
```

Now that we’ve done all that, we can run this example.
On startup you’ll see something like this:

```
[DEBUG] [11/22/2014 15:14:33.856] [main] [EventStream(akka://ws-system)] logger log1-Logging$DefaultLogger started
[DEBUG] [11/22/2014 15:14:33.857] [main] [EventStream(akka://ws-system)] Default Loggers started
[DEBUG] [11/22/2014 15:14:35.104] [main] [WSServer(akka://ws-system)] Registring actor:Some(Actor[akka://ws-system/user/echo#1509664759]) to /echo
15:14:35.211 [reactivemongo-akka.actor.default-dispatcher-5] INFO reactivemongo.core.actors.MongoDBSystem - The node set is now available
15:14:35.214 [reactivemongo-akka.actor.default-dispatcher-5] INFO reactivemongo.core.actors.MongoDBSystem - The primary is now available
```

Next, when we connect a websocket, we see the following:

```
[DEBUG] [11/22/2014 15:15:18.957] [WebSocketWorker-32] [WSServer(akka://ws-system)] OnOpen called org.java_websocket.WebSocketImpl@3161f479 :: org.java_websocket.handshake.HandshakeImpl1Client@6d9a6e19
[DEBUG] [11/22/2014 15:15:18.965] [ws-system-akka.actor.default-dispatcher-2] [akka://ws-system/user/echo] Received open request. Start listening for WARNING arguments left: 1
[INFO] [11/22/2014 15:15:18.973] [ws-system-akka.actor.default-dispatcher-5] [akka://ws-system/user/echo/$a] org.java_websocket.WebSocketImpl@3161f479 , org.smartjava.DB@73fd64
```

Now let’s insert a message into the mongo collection, which we created with the following command:

```
db.createCollection( "rmongo", { capped: true, size: 100000 } )
```

And let’s insert a message:

```
> db.rmongo.insert({"test": 1234567, "currentDate": new Date()})
WriteResult({ "nInserted" : 1 })
```

This results in the inserted document showing up in our websocket client. If you’re interested in the source files, look at the following directory in GitHub: https://github.com/josdirksen/smartjava/tree/master/ws-akka

Reference: ReactiveMongo with Akka, Scala and websockets from our JCG partner Jos Dirksen at the Smart Java blog.

Continuous Deployment: Introduction

This article is part of the Continuous Integration, Delivery and Deployment series. Continuous deployment is the ultimate culmination of software craftsmanship. Our skills need to be on such a high level that we have the confidence to continuously and automatically deploy our software to production. It is the natural evolution of continuous integration and delivery. We usually start with continuous integration, with software being built and tests executed on every commit to VCS. As we get better with the process, we proceed towards continuous delivery, with the process and, especially, the tests so well done that we have the confidence that any version of the software that passed all validation can be deployed to production. We can release the software any time we want with a click of a button. Continuous deployment is accomplished when we get rid of that button and deploy every “green” build to production. This article will try to explore the goals of the final stage of continuous deployment, the deployment itself. It assumes that static analysis is being performed; unit, functional and stress tests are being run; test code coverage is high enough; and that we have the confidence that our software is performing as expected after every commit. The goals of the deployment process are to:

- Run often
- Be automatic
- Be fast
- Provide zero-downtime
- Provide the ability to rollback

Deploy Often

The software industry is constantly under pressure to provide better quality and deliver faster, resulting in shorter time to market. The more often we deliver without sacrificing quality, the sooner we reap the benefits of the features we delivered. The time spent between having some feature developed and deployed to production is time wasted. Ideally, we should have new features delivered as soon as they are pushed to the repository. In the old days, we were used to the waterfall model of delivering software. We’d plan everything, develop everything, test everything and, finally, deploy the project to production.
It was not uncommon for that process to take months or even years. Putting months of work into production often resulted in deployment hell, with many people working through the weekend to put an endless amount of new code onto production servers. More often than not, things did not work as initially planned. Since then we have learned that there are great benefits to working on features instead of projects and delivering them as soon as they are done. Deliver often means deliver as soon as a feature is developed.

Automate everything

Having tasks automated allows us to remove part of the “human error” as well as to do things faster. With BDD as a great way to define requirements and validate them as soon as features are pushed to the repository, and TDD as a way to drive development and continuously validate the code on the unit level, we can gain the confidence that the code is performing as expected. The same automation should apply to deployment: from building artifacts through provisioning environments to the actual deployment. Building artifacts is a process that has been working well for quite some time. Whether it’s Gradle or Maven for Java (more info can be found in the Java Build Tools article), SBT for Scala, Gulp or Grunt for JavaScript, or whichever other build tool or programming language you’re using, hardly anyone these days has a problem building their software. Provisioning environments and deploying build artifacts is a process that still leaves a lot to be desired. Provisioning tools like Chef and Puppet did not fully deliver on their promise. They are either unreliable or too complex to use. Docker provided a fresh new approach (even though containers are not anything new). With it we can easily deploy containers running our software. Even though some provisioning of the servers themselves might still be needed, the usage of tools like Chef and Puppet is (or will be) greatly reduced, if needed at all.
With Docker, deploying fully functional software with everything it needs is as easy as executing a single command.

Be fast

The key to continuous deployment is speed. If the process from checking out the code through tests until deployment takes a lot of time, the feedback we’re hoping to get is slow. The idea is to get feedback on commits as fast as possible so that, if there are problems, they can be fixed before we move on to developing another feature. Context switching is expensive, and going back and forth is a big waste of time. The need for speed does not apply only to test run time but also to deployment. The sooner we deploy, the sooner we’ll be able to start integration, regression and stress tests. If those are not run on production servers, there will be another deployment after they are successful. At first glance the difference between 5 and 10 minutes seems small, but things add up. Checking out the code, running static analysis, executing unit, functional, integration and stress tests and, finally, deploying: together those can sum to hours, while, in my opinion, a reasonable time for the whole process should not be more than 30 minutes. Run fast, fail fast, deploy quickly and reap the benefits of having new features online.

Zero-downtime

Deploying continuously, or at least often, means that a zero-downtime policy is a must. If we deployed once a month or only several times a year, having the application unavailable for a short period of time might not be such a bad thing. However, if we’re deploying once a day or even many times a day, downtime, however short it might be, is unacceptable. There are different ways to reach zero-downtime, with blue-green deployment being my favorite.

Ability to rollback

No matter how many automated tests we put in place, there is always the possibility that something will go wrong. The option to rollback to the previous version might save us a lot of trouble if something unexpected happens.
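To make the zero-downtime idea concrete, here is a generic sketch (not the author’s setup) of the mechanism behind blue-green deployment: two complete environments exist side by side, and releasing means switching which one is live. The switch is modeled below as a symlink flip over hypothetical `blue`/`green` release directories under `/tmp/bluegreen`:

```shell
# Two complete releases live side by side.
mkdir -p /tmp/bluegreen/blue /tmp/bluegreen/green
echo "v1" > /tmp/bluegreen/blue/version
echo "v2" > /tmp/bluegreen/green/version

# "current" points at the live environment; start with blue serving traffic.
ln -sfn /tmp/bluegreen/blue /tmp/bluegreen/current

# Deploy: green is fully prepared and tested while blue keeps serving,
# then a single symlink flip makes green the live environment.
ln -sfn /tmp/bluegreen/green /tmp/bluegreen/current
cat /tmp/bluegreen/current/version    # prints: v2

# Rollback is the same flip in reverse.
ln -sfn /tmp/bluegreen/blue /tmp/bluegreen/current
cat /tmp/bluegreen/current/version    # prints: v1
```

In practice the “environments” are application instances or containers and the flip happens in a router or load balancer rather than on the filesystem, but the property the article asks for is the same: the new version is fully verified before the switch, the switch is a single cheap operation, and rollback is that same operation in reverse.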
Actually, we can build our software in a way that lets us rollback to any previous version that passed all our tests. Rollback needs to be as easy as the push of a button, and it needs to be very fast, in order to restore the expected functioning of the application with minimal negative effect. Downtime or incorrect behavior costs money and trust; the less time it lasts, the less money we lose. The major obstacle to accomplishing rollback is often the database. NoSQL tends to handle rollback with more grace. However, the reality is that relational DBs are not going to disappear any time soon, and we need to get used to writing delta scripts in a way that keeps all changes backward compatible.

Summary

Continuous deployment sounds to many too risky or even impossible. Whether it’s risky depends on the architecture of the software we’re building. As a general rule, splitting the application into smaller independent elements helps a lot. Microservices are the way to go if possible. Risks aside, in many cases there is no business reason or willingness to adopt continuous deployment. Still, software can be continuously deployed to test servers, thus becoming continuous delivery. No matter whether we are doing continuous deployment, delivery, integration or none of those, having automatic and fast deployment with zero downtime and the ability to rollback provides great benefits. If for no other reason, because it frees us to do more productive and beneficial tasks. We should design and code our software and let machines do the rest for us. The next article will continue where we stopped and explore different strategies for deploying software.

Reference: Continuous Deployment: Introduction from our JCG partner Viktor Farcic at the Technology conversations blog.

Docker Common Commands Cheatsheet

Docker CLI provides a comprehensive set of commands. Here is a quick cheat sheet of the commonly used commands:

Build an image: docker build --rm=true .
Install an image: docker pull ${IMAGE}
List installed images: docker images
List installed images (detailed listing): docker images --no-trunc
Remove an image: docker rmi ${IMAGE_ID}
Remove all untagged images: docker rmi $(docker images | grep "^<none>" | awk '{print $3}')
Remove all images: docker rmi $(docker images -q)
Run a container: docker run
List containers: docker ps
Stop a container: docker stop ${CID}
Find the IP address of a container: docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${CID}
Attach to a container: docker attach ${CID}
Remove a container: docker rm ${CID}
Remove all containers: docker rm $(docker ps -aq)

What other commands do you use commonly?

Reference: Docker Common Commands Cheatsheet from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....

Are recruiters generally all so bad?

I have a friend who's a recruiter. Yes, that can actually happen. I mentioned to him that I'd been contacted by a really annoying recruiter, and he responded with "Can I ask you a serious question? Are 'they' generally all so bad? I'm struggling a little to understand why there is so much bad feeling generally, towards them." I had a think about this, and here's the reply I sent him. It's far from exhaustive, but I think it serves to start a discussion.

Lies

1. Every email or phone call contains the phrase "this company is the leader in its field". Generally... no, it's not.
2. "This is a fantastic opportunity for you". This may occasionally be true, but when you see it in every email, it's hard to shake the suspicion that it's a template.
3. "We pay a referral fee". I once suggested someone to a recruiter, and the person got the job. Six months later, I repeatedly contacted the recruiter but didn't get any response, so I asked the company HR person to contact them. They finally got back in touch and said the person was actually suggested by someone else, who they named... and I knew that person. I got in touch with him and told him to collect the bounty, but they refused to pay it.
4. If my LinkedIn profile says I'm not interested in a new position, don't contact me about one and then say "I got your details from a friend of yours".

Laziness

1. Don't send me an email that's completely different to my profile. If you're contacting me, you either have my CV or you've seen my LinkedIn profile.
2. Don't end every email with "Please pass this onto all your friends". I don't ask you to write code; don't ask me to do your job for free (also see Lies #3).
3. Don't ask for my CV in Word format; it's obvious you're going to cut and paste it into some shitty template so the recruiter's client won't see my direct contact details. I put effort into designing my CV, and I don't want you to destroy it.
4. If you want to connect with me on LinkedIn, don't use the default message.
I am not click-bait.

Personality

As a recruiter, I assume you're a shark-like parasite. If you want to work with me (by which I mean, cream a shitload of money off me), put some effort into building a relationship.

Capabilities

If you don't know anything about technology, don't pretend you do. "My client is looking for someone with experience X, which I see you have" is fine; "...yeah, it's really cutting edge stuff like [some dead technology]..." is bullshit. Don't make promises you can't keep.

Market

It's a seller's market. As a developer, I'm selling something. As a recruiter, you're forcing yourself into the market as a broker. Brokers are plentiful; good developers are not. Convince me why I should use you to broker my skills; preferably, don't use bullshit during this process.

Finally, two thoughts

If I say I'm not interested in a role, it means I'm not interested in a role. Wheedling, begging and threatening (happened to a friend of mine!) will not change anything, except that I've now blacklisted your name and your company. Developers talk. If you're an asshole, that piece of news will spread.

Reference: Are recruiters generally all so bad? from our JCG partner Steve Chaloner at the Objectify blog....

Avoid unwanted component scanning of Spring Configuration

I came across an interesting problem on Stack Overflow. Brett Ryan had a problem where his Spring Security configuration was initialized twice. When I looked into his code I spotted the problem. Let me show the code. He has a pretty standard Spring application (not using Spring Boot), using the more modern Java servlet configuration based on Spring's AbstractAnnotationConfigDispatcherServletInitializer:

import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer;

public class AppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        return new Class[]{SecurityConfig.class};
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        return new Class[]{WebConfig.class};
    }

    @Override
    protected String[] getServletMappings() {
        return new String[]{"/"};
    }
}

As you can see, there are two configuration classes:

SecurityConfig – holds the Spring Security configuration
WebConfig – the main Spring IoC container configuration

package net.lkrnac.blog.dontscanconfigurations;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.config.annotation.web.servlet.configuration.EnableWebMvcSecurity;

@Configuration
@EnableWebMvcSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
        System.out.println("Spring Security init...");
        auth
            .inMemoryAuthentication()
            .withUser("user").password("password").roles("USER");
    }
}

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
@EnableWebMvc
@ComponentScan(basePackages = "net.lkrnac.blog.dontscanconfigurations")
public class WebConfig extends WebMvcConfigurerAdapter {
}

Pay attention to the component scanning in WebConfig. It is scanning the package where all three classes are located. When you run this on a servlet container, the text "Spring Security init..." is written to the console twice. This means the SecurityConfig configuration is loaded twice. It was loaded:

1. During initialization of the servlet container, via AppInitializer.getRootConfigClasses()
2. By the component scan in class WebConfig

Why? I found this explanation in Spring's documentation: Remember that @Configuration classes are meta-annotated with @Component, so they are candidates for component-scanning! So this is a feature of Spring, and therefore we want to avoid component scanning of the Spring @Configuration classes used by the servlet configuration. Brett Ryan independently found this problem and showed his solution in the mentioned Stack Overflow question:

@ComponentScan(basePackages = "com.acme.app",
    excludeFilters = {
        @Filter(type = ASSIGNABLE_TYPE, value = {
            WebConfig.class,
            SecurityConfig.class
        })
    })

I don't like this solution. The annotation is too verbose for me. Also, some developer could create a new @Configuration class and forget to include it in this filter. I would rather specify a special package that would be excluded from Spring's component scanning. I created a sample project on Github so that you can play with it.

Reference: Avoid unwanted component scanning of Spring Configuration from our JCG partner Lubos Krnac at the Lubos Krnac Java blog blog....
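To make the "excluded package" idea above concrete, one possible sketch (my own illustration, not code from the sample project) is to keep all @Configuration classes in a dedicated sub-package and exclude it with a regex filter; the config sub-package name here is hypothetical:

```java
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.ComponentScan.Filter;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.FilterType;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
@EnableWebMvc
@ComponentScan(
        basePackages = "net.lkrnac.blog.dontscanconfigurations",
        // Exclude everything under the (hypothetical) config sub-package,
        // so a newly added @Configuration class placed there can never be
        // picked up twice by the component scan.
        excludeFilters = @Filter(type = FilterType.REGEX,
                pattern = "net\\.lkrnac\\.blog\\.dontscanconfigurations\\.config\\..*"))
public class WebConfig extends WebMvcConfigurerAdapter {
}
```

With this convention the filter never needs to be updated when a new configuration class is added; only the package placement matters.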

Black Box Testing of Spring Boot Microservice is so easy

When I needed to do prototyping, a proof of concept, or play with some new technology in my free time, starting a new project was always a little annoying barrier with Maven. I have to say that setting up a Maven project is not hard, and you can use Maven Archetypes. But Archetypes are often out of date. Who wants to play with old technologies? So I always ended up wiring in the dependencies I wanted to play with. Not very productively spent time. But then Spring Boot came my way. I fell in love. In the last few months I have created at least 50 small playground projects and prototypes with Spring Boot, and I have also incorporated it at work. It's just perfect for prototyping, learning, microservices, web, batch, enterprise, message flow or command line applications. You have to be a dinosaur or be blind not to evaluate Spring Boot for your next Spring project. And when you finish evaluating it, you will go for it. I promise. I feel a need to highlight how easy Black Box Testing of a Spring Boot microservice is. Black Box Testing refers to testing without any poking at the application artifact. Such testing can also be called integration testing. You can also perform performance or stress testing the way I am going to demonstrate. A Spring Boot microservice is usually a web application with embedded Tomcat, so it is executed as a JAR from the command line. There is a possibility to convert a Spring Boot project into a WAR artifact that can be hosted on a shared servlet container, but we don't want that now. It's better when a microservice has its own little embedded container. I used the existing Spring REST service guide as the testing target. The focus is mostly on the testing project, so it is handy to use this "Hello World" REST application as the example. I expect these two common tools are set up and installed on your machine:

Maven 3
Git

So we'll need to download the source code and install the JAR artifact into our local repository. I am going to use the command line to download and install the microservice.
Let's go to some directory where we'll download the source code. Use these commands:

git clone git@github.com:spring-guides/gs-rest-service.git
cd gs-rest-service/complete
mvn clean install

If everything went OK, the Spring Boot microservice JAR artifact is now installed in our local Maven repository. In serious Java development, it would rather be installed into a shared repository (e.g. Artifactory, Nexus, …). When our microservice is installed, we can focus on the testing project. It is also Maven and Spring Boot based. Black box testing will be achieved by downloading the artifact from the Maven repository (it doesn't matter if it is local or remote). The maven-dependency-plugin can help us this way:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
        <execution>
            <id>copy-dependencies</id>
            <phase>compile</phase>
            <goals>
                <goal>copy-dependencies</goal>
            </goals>
            <configuration>
                <includeArtifactIds>gs-rest-service</includeArtifactIds>
                <stripVersion>true</stripVersion>
            </configuration>
        </execution>
    </executions>
</plugin>

It downloads the microservice artifact into the target/dependency directory by default. As you can see, it's hooked to the compile phase of the Maven lifecycle, so the downloaded artifact is available during the test phase. The artifact file name is stripped of version information; we use the latest version. This makes using the JAR artifact during testing easier. Readers skilled with Maven may notice the missing plugin version. A Spring Boot driven project inherits from a parent Maven project called spring-boot-starter-parent, which contains the versions of the main Maven plugins. This is one of Spring Boot's opinionated aspects. I like it, because it provides a stable dependency matrix. You can change the version if you need to. When we have the artifact in our file system, we can start testing.
We need to be able to execute the JAR file from the command line. I used the standard Java ProcessBuilder this way:

public class ProcessExecutor {
    public Process execute(String jarName) throws IOException {
        Process p = null;
        ProcessBuilder pb = new ProcessBuilder("java", "-jar", jarName);
        pb.directory(new File("target/dependency"));
        File log = new File("log");
        pb.redirectErrorStream(true);
        pb.redirectOutput(Redirect.appendTo(log));
        p = pb.start();
        return p;
    }
}

This class executes the given JAR as a process, based on the given file name. The location is hard-coded to the target/dependency directory, where maven-dependency-plugin placed our artifact. Standard and error outputs are redirected to a file. The next class needed for testing is a DTO (Data transfer object). It is a simple POJO that will be used for deserialization from JSON. I use the Lombok project to reduce the boilerplate code needed for getters, setters, hashCode and equals:

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Greeting {
    private long id;
    private String content;
}

The test itself looks like this:

public class BlackBoxTest {
    private static final String RESOURCE_URL = "http://localhost:8080/greeting";

    @Test
    public void contextLoads() throws InterruptedException, IOException {
        Process process = null;
        Greeting actualGreeting = null;
        try {
            process = new ProcessExecutor().execute("gs-rest-service.jar");

            RestTemplate restTemplate = new RestTemplate();
            waitForStart(restTemplate);

            actualGreeting = restTemplate.getForObject(RESOURCE_URL, Greeting.class);
        } finally {
            process.destroyForcibly();
        }
        Assert.assertEquals(new Greeting(2L, "Hello, World!"), actualGreeting);
    }

    private void waitForStart(RestTemplate restTemplate) {
        while (true) {
            try {
                Thread.sleep(500);
                restTemplate.getForObject(RESOURCE_URL, String.class);
                return;
            } catch (Throwable throwable) {
                // ignoring errors
            }
        }
    }
}

It executes the Spring Boot microservice process first and waits until it starts. To verify that the microservice has started, it sends an HTTP request to the URL where it's expected.
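The waitForStart busy-loop above spins forever if the service never comes up. A variation with a deadline (my own JDK-only sketch, not part of the original project) lets a broken build fail fast instead:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class StartupWaiter {

    /**
     * Polls the given URL until it answers or the timeout elapses.
     * Returns true if the service responded, false on timeout.
     */
    public static boolean waitForStart(String url, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try {
                HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
                connection.setConnectTimeout(500);
                connection.setReadTimeout(500);
                connection.getResponseCode(); // any HTTP response means the server is up
                connection.disconnect();
                return true;
            } catch (IOException notYetUp) {
                try {
                    Thread.sleep(500); // server not listening yet, retry shortly
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }
}
```

A false return can then be turned into an immediate assertion failure in the test rather than a hung build.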
The service is ready for testing after the first successful response. The microservice should send a simple greeting JSON response for an HTTP GET request. Deserialization from JSON into our Greeting DTO is verified at the end of the test. The source code is shared on Github.

Reference: Black Box Testing of Spring Boot Microservice is so easy from our JCG partner Lubos Krnac at the Lubos Krnac Java blog blog....

Converting between CompletableFuture and Observable

CompletableFuture<T> from Java 8 is an advanced abstraction over a promise that a value of type T will be available in the future. Observable<T> is quite similar, but it promises an arbitrary number of items in the future, from 0 to infinity. These two representations of asynchronous results are quite similar, to the point where an Observable with just one item can be used instead of a CompletableFuture and vice versa. On the other hand, CompletableFuture is more specialized and, because it's now part of the JDK, should become prevalent quite soon. Let's celebrate the RxJava 1.0 release with a short article showing how to convert between the two, without losing their asynchronous and event-driven nature.

From CompletableFuture<T> to Observable<T>

CompletableFuture represents one value in the future, so turning it into an Observable is rather simple. When the Future completes with some value, the Observable will emit that value as well, immediately, and close the stream:

class FuturesTest extends Specification {

    public static final String MSG = "Don't panic"

    def 'should convert completed Future to completed Observable'() {
        given:
            CompletableFuture<String> future = CompletableFuture.completedFuture("Abc")
        when:
            Observable<String> observable = Futures.toObservable(future)
        then:
            observable.toBlocking().toIterable().toList() == ["Abc"]
    }

    def 'should convert failed Future into Observable with failure'() {
        given:
            CompletableFuture<String> future = failedFuture(new IllegalStateException(MSG))
        when:
            Observable<String> observable = Futures.toObservable(future)
        then:
            observable
                .onErrorReturn({ th -> th.message } as Func1)
                .toBlocking()
                .toIterable()
                .toList() == [MSG]
    }

    CompletableFuture failedFuture(Exception error) {
        CompletableFuture future = new CompletableFuture()
        future.completeExceptionally(error)
        return future
    }
}

The first test of the not-yet-implemented Futures.toObservable() converts a Future into an Observable and makes sure the value is propagated correctly.
The second test creates a failed Future, replaces the failure with the exception's message and makes sure the exception was propagated. The implementation is much shorter:

public static <T> Observable<T> toObservable(CompletableFuture<T> future) {
    return Observable.create(subscriber ->
        future.whenComplete((result, error) -> {
            if (error != null) {
                subscriber.onError(error);
            } else {
                subscriber.onNext(result);
                subscriber.onCompleted();
            }
        }));
}

NB: Observable.from(Future) exists; however, we want to take full advantage of CompletableFuture's asynchronous operators.

From Observable<T> to CompletableFuture<List<T>>

There are actually two ways to convert an Observable to a Future – creating a CompletableFuture<List<T>> or a CompletableFuture<T> (if we assume the Observable has just one item). Let's start with the former case, described by the following test cases:

def 'should convert Observable with many items to Future of list'() {
    given:
        Observable<Integer> observable = Observable.just(1, 2, 3)
    when:
        CompletableFuture<List<Integer>> future = Futures.fromObservable(observable)
    then:
        future.get() == [1, 2, 3]
}

def 'should return failed Future when after few items exception was emitted'() {
    given:
        Observable<Integer> observable = Observable.just(1, 2, 3)
            .concatWith(Observable.error(new IllegalStateException(MSG)))
    when:
        Futures.fromObservable(observable)
    then:
        def e = thrown(Exception)
        e.message == MSG
}

Obviously the Future doesn't complete until the source Observable signals the end of the stream. Thus Observable.never() would never complete the wrapping Future, rather than completing it with an empty list. The implementation is much shorter and sweeter:

public static <T> CompletableFuture<List<T>> fromObservable(Observable<T> observable) {
    final CompletableFuture<List<T>> future = new CompletableFuture<>();
    observable
        .doOnError(future::completeExceptionally)
        .toList()
        .forEach(future::complete);
    return future;
}

The key is Observable.toList(), which conveniently converts from Observable<T> to Observable<List<T>>.
The latter emits one item of List<T> type when the source Observable<T> finishes.

From Observable<T> to CompletableFuture<T>

A special case of the previous transformation happens when we know that the Observable<T> will return exactly one item. In that case we can convert it directly to a CompletableFuture<T>, rather than a CompletableFuture<List<T>> with one item only. Tests first:

def 'should convert Observable with single item to Future'() {
    given:
        Observable<Integer> observable = Observable.just(1)
    when:
        CompletableFuture<Integer> future = Futures.fromSingleObservable(observable)
    then:
        future.get() == 1
}

def 'should create failed Future when Observable fails'() {
    given:
        Observable<String> observable = Observable.<String> error(new IllegalStateException(MSG))
    when:
        Futures.fromSingleObservable(observable)
    then:
        def e = thrown(Exception)
        e.message == MSG
}

def 'should fail when single Observable produces too many items'() {
    given:
        Observable<Integer> observable = Observable.just(1, 2)
    when:
        Futures.fromSingleObservable(observable)
    then:
        def e = thrown(Exception)
        e.message.contains("too many elements")
}

Again the implementation is quite straightforward and almost identical:

public static <T> CompletableFuture<T> fromSingleObservable(Observable<T> observable) {
    final CompletableFuture<T> future = new CompletableFuture<>();
    observable
        .doOnError(future::completeExceptionally)
        .single()
        .forEach(future::complete);
    return future;
}

The helper methods above aren't fully robust yet, but if you ever need to convert between the JDK 8 and RxJava styles of asynchronous computing, this article should be enough to get you started.

Reference: Converting between Completablefuture and Observable from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....
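Stripped of RxJava, the fromObservable pattern above boils down to "buffer each item, complete the future when the stream signals its end". The following stdlib-only sketch is my own illustration of that pattern; the Emitter interface is a hypothetical stand-in for Observable, not code from the article:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class StreamToFuture {

    // Minimal stand-in for an Observable: the producer pushes items,
    // then signals completion or error.
    public interface Emitter<T> {
        void onNext(T item);
        void onError(Throwable error);
        void onCompleted();
    }

    /**
     * Same contract as Futures.fromObservable(): buffer every item and
     * complete the future with the whole list once the stream ends.
     */
    public static <T> CompletableFuture<List<T>> collect(Consumer<Emitter<T>> source) {
        CompletableFuture<List<T>> future = new CompletableFuture<>();
        List<T> buffer = new ArrayList<>();
        source.accept(new Emitter<T>() {
            public void onNext(T item) { buffer.add(item); }
            public void onError(Throwable error) { future.completeExceptionally(error); }
            public void onCompleted() { future.complete(buffer); }
        });
        return future;
    }
}
```

The error path mirrors doOnError(future::completeExceptionally): a first onError completes the future exceptionally, and any later onCompleted is ignored because CompletableFuture can only complete once.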

Deployment Pipeline for Java EE 7 with WildFly, Arquillian, Jenkins, and OpenShift

Tech Tip #54 showed how to Arquillianate (Arquillianize?) an existing Java EE project and run those tests in remote mode, where WildFly is running on a known host and port. Tech Tip #55 showed how to run those tests when WildFly is running in OpenShift. Both of these tips used Maven profiles to separate the appropriate Arquillian dependencies in "pom.xml" and <container> configuration in "arquillian.xml" to define where WildFly is running and how to connect to it. This tip will show how to configure Jenkins in OpenShift and invoke these tests from Jenkins. Let's see it in action first!

The configuration required to connect from Jenkins on OpenShift to a WildFly instance on OpenShift is similar to that required for connecting from a local machine to WildFly on OpenShift. This configuration is specified in "arquillian.xml", and we can specify some parameters which can then be defined in Jenkins. On a high level, here is what we'll do:

Use the code created in Tech Tip #54 and #55 and add configuration for Arquillian/Jenkins/OpenShift
Enable Jenkins
Create a new WildFly Test instance
Configure Jenkins to run tests on the Test instance
Push the application to Production only if tests pass on the Test instance

Let's get started!

Remove the existing boilerplate source code, only the src directory and pom.xml, from the WildFly git repo created in Tech Tip #55:

mywildfly> git rm -rf src/ pom.xml
rm 'pom.xml'
rm 'src/main/java/.gitkeep'
rm 'src/main/resources/.gitkeep'
rm 'src/main/webapp/WEB-INF/web.xml'
rm 'src/main/webapp/images/jbosscorp_logo.png'
rm 'src/main/webapp/index.html'
rm 'src/main/webapp/snoop.jsp'
mywildfly> git commit .
-m"removing source and pom" [master 564b275] removing source and pom 7 files changed, 647 deletions(-) delete mode 100644 pom.xml delete mode 100644 src/main/java/.gitkeep delete mode 100644 src/main/resources/.gitkeep delete mode 100644 src/main/webapp/WEB-INF/web.xml delete mode 100644 src/main/webapp/images/jbosscorp_logo.png delete mode 100644 src/main/webapp/index.html delete mode 100644 src/main/webapp/snoop.jspSet a new remote to javaee7-continuous-delivery repository: mywildfly> git remote add javaee7 https://github.com/arun-gupta/javaee7-continuous-delivery.git mywildfly> git remote -v javaee7 https://github.com/arun-gupta/javaee7-continuous-delivery.git (fetch) javaee7 https://github.com/arun-gupta/javaee7-continuous-delivery.git (push) origin ssh://54699516ecb8d41cb8000016@mywildfly-milestogo.rhcloud.com/~/git/mywildfly.git/ (fetch) origin ssh://54699516ecb8d41cb8000016@mywildfly-milestogo.rhcloud.com/~/git/mywildfly.git/ (push)Pull the code from new remote: mywildfly> git pull javaee7 master warning: no common commits remote: Counting objects: 62, done. remote: Compressing objects: 100% (45/45), done. remote: Total 62 (delta 14), reused 53 (delta 5) Unpacking objects: 100% (62/62), done. From https://github.com/arun-gupta/javaee7-continuous-delivery * branch master -> FETCH_HEAD * [new branch] master -> javaee7/master Merge made by the 'recursive' strategy. 
.gitignore | 6 +++
README.asciidoc | 15 ++++++
pom.xml | 197 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
src/main/java/org/javaee7/sample/MyApplication.java | 9 ++++
src/main/java/org/javaee7/sample/Person.java | 31 ++++++++++++
src/main/java/org/javaee7/sample/PersonDatabase.java | 39 ++++++++++++++
src/main/java/org/javaee7/sample/PersonResource.java | 29 +++++++++++
src/main/webapp/index.jsp | 13 +++++
src/test/java/org/javaee7/sample/PersonTest.java | 77 ++++++++++++++++++++++++++++
src/test/resources/arquillian.xml | 26 ++++++++++
10 files changed, 442 insertions(+)
create mode 100644 .gitignore
create mode 100644 README.asciidoc
create mode 100644 pom.xml
create mode 100644 src/main/java/org/javaee7/sample/MyApplication.java
create mode 100644 src/main/java/org/javaee7/sample/Person.java
create mode 100644 src/main/java/org/javaee7/sample/PersonDatabase.java
create mode 100644 src/main/java/org/javaee7/sample/PersonResource.java
create mode 100644 src/main/webapp/index.jsp
create mode 100644 src/test/java/org/javaee7/sample/PersonTest.java
create mode 100644 src/test/resources/arquillian.xml

This will bring all the source code, including our REST endpoints, web pages, tests, and the updated "pom.xml" and "arquillian.xml". The updated "pom.xml" has two new profiles:

<profiles>
    <profile>
        <id>openshift</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-war-plugin</artifactId>
                    <version>2.3</version>
                    <configuration>
                        <failOnMissingWebXml>false</failOnMissingWebXml>
                        <outputDirectory>deployments</outputDirectory>
                        <warName>ROOT</warName>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    </profile>
    <profile>
        <id>jenkins-openshift</id>
        <build>
            <plugins>
                <plugin>
                    <artifactId>maven-surefire-plugin</artifactId>
                    <version>2.14.1</version>
                    <configuration>
                        <systemPropertyVariables>
                            <arquillian.launch>jenkins-openshift</arquillian.launch>
                        </systemPropertyVariables>
                    </configuration>
                </plugin>
            </plugins>
        </build>
        <dependencies>
            <dependency>
                <groupId>org.jboss.arquillian.container</groupId>
                <artifactId>arquillian-openshift</artifactId>
                <version>1.0.0.Final-SNAPSHOT</version>
                <scope>test</scope>
            </dependency>
        </dependencies>
    </profile>
</profiles>

A few points to observe here:

The "openshift" profile is used when building the application on OpenShift. This is where the application's WAR file is created and deployed to WildFly.
A new profile, "jenkins-openshift", is added. It will be used by the Jenkins instance (to be enabled shortly) in OpenShift to run the tests.
The "arquillian-openshift" dependency is the same as used in Tech Tip #55 and allows running Arquillian tests on a WildFly instance on OpenShift.
This profile refers to the "jenkins-openshift" container configuration that will be defined in "arquillian.xml". The updated "src/test/resources/arquillian.xml" has the following container:

<container qualifier="jenkins-openshift">
    <configuration>
        <property name="namespace">${env.ARQ_DOMAIN}</property>
        <property name="application">${env.ARQ_APPLICATION}</property>
        <property name="libraDomain">rhcloud.com</property>
        <property name="sshUserName">${env.ARQ_SSH_USER_NAME}</property>
        <property name="login">arungupta@redhat.com</property>
        <property name="deploymentTimeoutInSeconds">300</property>
        <property name="disableStrictHostChecking">true</property>
    </configuration>
</container>

This container configuration is similar to the one that was added in Tech Tip #55. The only difference here is that the domain name, application name, and SSH user name are parametrized. The values of these properties are defined in the Jenkins instance configuration and allow running the tests against a separate test node. Two more things need to be done before the changes can be pushed to the remote repository. The first is to create a WildFly Test instance which can be used to run the tests. This can be easily done as shown:

workspaces> rhc app-create mywildflytest jboss-wildfly-8
Application Options
-------------------
Domain: milestogo
Cartridges: jboss-wildfly-8
Gear Size: default
Scaling: no

Creating application 'mywildflytest' ...
Artifacts deployed: ./ROOT.war
done

WildFly 8 administrator added. Please make note of these credentials:

Username: adminITJt7Yh
Password: yXP2mUd1w4_8

run 'rhc port-forward mywildflytest' to access the web admin area on port 9990.

Waiting for your DNS name to be available ... done

Cloning into 'mywildflytest'...
Warning: Permanently added the RSA host key for IP address '' to the list of known hosts.

Your application 'mywildflytest' is now available.

URL: http://mywildflytest-milestogo.rhcloud.com/
SSH to: 546e3743ecb8d49ca9000014@mywildflytest-milestogo.rhcloud.com
Git remote: ssh://546e3743ecb8d49ca9000014@mywildflytest-milestogo.rhcloud.com/~/git/mywildflytest.git/
Cloned to: /Users/arungupta/workspaces/javaee7/mywildflytest

Run 'rhc show-app mywildflytest' for more details about your app.

Note the domain here is milestogo, the application name is mywildflytest, and the SSH user name is 546e3743ecb8d49ca9000014. These will be passed to Arquillian for running the tests. The second is to enable and configure Jenkins.

In your OpenShift Console, pick the "mywildfly" application and click on the "Enable Jenkins" link. Remember, this is not your Test instance, because all the source code lives on the instance created earlier. Provide the appropriate name, e.g. jenkins-milestogo.rhcloud.com in my case, and click on the "Add Jenkins" button. This will provision a Jenkins instance, if not already there, and also configure the project with a script to build and deploy the application. Note down the name and password credentials. Use the credentials to log in to your Jenkins instance.

Select the appropriate build, "mywildfly-build" in this case. Scroll down to the "Build" section and add the following script right after "# Run tests here" in the Execute Shell:

export ARQ_DOMAIN=milestogo
export ARQ_SSH_USER_NAME=546e3743ecb8d49ca9000014
export ARQ_APPLICATION=mywildflytest
mvn test -Pjenkins-openshift

Click on "Save" to save the configuration. This will allow running the Arquillian tests on the Test instance. If the tests pass, the app is deployed. If the tests fail, none of the steps after that step are executed, and so the app is not deployed. Let's push the changes to the remote repo now:

mywildfly> git push
Counting objects: 68, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (49/49), done. Writing objects: 100% (61/61), 8.85 KiB | 0 bytes/s, done. Total 61 (delta 14), reused 0 (delta 0) remote: Executing Jenkins build. remote: remote: You can track your build at https://jenkins-milestogo.rhcloud.com/job/mywildfly-build remote: remote: Waiting for build to schedule............................................................................................Done remote: Waiting for job to complete................................................................................................................................................................................................................................................................................................................................................................................................Done remote: SUCCESS remote: New build has been deployed. remote: ------------------------- remote: Git Post-Receive Result: success remote: Deployment completed with status: success To ssh://546cef93ecb8d4ff37000003@mywildfly-milestogo.rhcloud.com/~/git/mywildfly.git/ e8f6c61..e9ad206 master -> master The number of dots indicate the wait for a particular task and will most likely vary for different runs.  
And Jenkins console (jenkins-milestogo.rhcloud.com/job/mywildfly-build/1/console) shows the output as: ------------------------------------------------------- T E S T S ------------------------------------------------------- Running org.javaee7.sample.PersonTest Nov 20, 2014 2:54:56 PM org.jboss.arquillian.container.openshift.OpenShiftContainer start INFO: Preparing Arquillian OpenShift container at http://mywildflytest-milestogo.rhcloud.com Nov 20, 2014 2:55:48 PM org.jboss.arquillian.container.openshift.OpenShiftRepository push INFO: Pushed to the remote repository ssh://546e3743ecb8d49ca9000014@mywildflytest-milestogo.rhcloud.com/~/git/mywildflytest.git/ Nov 20, 2014 2:56:37 PM org.jboss.arquillian.container.openshift.OpenShiftRepository push INFO: Pushed to the remote repository ssh://546e3743ecb8d49ca9000014@mywildflytest-milestogo.rhcloud.com/~/git/mywildflytest.git/ Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 103.056 sec Nov 20, 2014 2:56:37 PM org.jboss.arquillian.container.openshift.OpenShiftContainer stop INFO: Shutting down Arquillian OpenShift container at http://mywildflytest-milestogo.rhcloud.com Results :Tests run: 2, Failures: 0, Errors: 0, Skipped: 0[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3:13.069s [INFO] Finished at: Thu Nov 20 14:57:34 EST 2014 [INFO] Final Memory: 10M/101M [INFO] ------------------------------------------------------------------------ + /usr/libexec/openshift/cartridges/jenkins/bin/git_ssh_wrapper.sh 546e36e5e0b8cd4e2a000007@mywildfly-milestogo.rhcloud.com 'gear stop --conditional' Warning: Permanently added 'mywildfly-milestogo.rhcloud.com,' (RSA) to the list of known hosts. Stopping gear... Stopping wildfly cart Sending SIGTERM to wildfly:418673 ... 
+ rsync --delete-after -azO -e /usr/libexec/openshift/cartridges/jenkins/bin/git_ssh_wrapper.sh /var/lib/openshift/546e46304382ec3f29000012//.m2/ '546e36e5e0b8cd4e2a000007@mywildfly-milestogo.rhcloud.com:~/.m2/'
Warning: Permanently added 'mywildfly-milestogo.rhcloud.com,' (RSA) to the list of known hosts.
+ rsync --delete-after -azO -e /usr/libexec/openshift/cartridges/jenkins/bin/git_ssh_wrapper.sh /var/lib/openshift/546e46304382ec3f29000012/app-root/runtime/repo/deployments/ '546e36e5e0b8cd4e2a000007@mywildfly-milestogo.rhcloud.com:${OPENSHIFT_REPO_DIR}deployments/'
Warning: Permanently added 'mywildfly-milestogo.rhcloud.com,' (RSA) to the list of known hosts.
+ rsync --delete-after -azO -e /usr/libexec/openshift/cartridges/jenkins/bin/git_ssh_wrapper.sh /var/lib/openshift/546e46304382ec3f29000012/app-root/runtime/repo/.openshift/ '546e36e5e0b8cd4e2a000007@mywildfly-milestogo.rhcloud.com:${OPENSHIFT_REPO_DIR}.openshift/'
Warning: Permanently added 'mywildfly-milestogo.rhcloud.com,' (RSA) to the list of known hosts.
+ /usr/libexec/openshift/cartridges/jenkins/bin/git_ssh_wrapper.sh 546e36e5e0b8cd4e2a000007@mywildfly-milestogo.rhcloud.com 'gear remotedeploy'
Warning: Permanently added 'mywildfly-milestogo.rhcloud.com,' (RSA) to the list of known hosts.
Preparing build for deployment
Deployment id is dff28e58
Activating deployment
Deploying WildFly
Starting wildfly cart
Found listening port
Found listening port
/var/lib/openshift/546e36e5e0b8cd4e2a000007/wildfly/standalone/deployments /var/lib/openshift/546e36e5e0b8cd4e2a000007/wildfly
/var/lib/openshift/546e36e5e0b8cd4e2a000007/wildfly
CLIENT_MESSAGE: Artifacts deployed: ./ROOT.war
Archiving artifacts
Finished: SUCCESS

Log files for Jenkins can be viewed as shown:

Nov 20, 2014 2:51:11 PM hudson.plugins.openshift.OpenShiftCloud provision
INFO: Provisioning new node for workload = 2 and label = mywildfly-build in domain milestogo
Nov 20, 2014 2:51:11 PM hudson.plugins.openshift.OpenShiftCloud getOpenShiftConnection
INFO: Initiating Java Client Service - Configured for OpenShift Server https://openshift.redhat.com
Nov 20, 2014 2:51:11 PM com.openshift.internal.client.RestService request
INFO: Requesting GET with protocol 1.2 on https://openshift.redhat.com/broker/rest/api
Nov 20, 2014 2:51:11 PM com.openshift.internal.client.RestService request
INFO: Requesting GET with protocol 1.2 on https://openshift.redhat.com/broker/rest/user
Nov 20, 2014 2:51:11 PM com.openshift.internal.client.RestService request
. . .
INFO: Checking availability of computer hudson.plugins.openshift.OpenShiftSlave@8ce21115
Nov 20, 2014 2:53:35 PM com.openshift.internal.client.RestService request
INFO: Requesting GET with protocol 1.2 on https://openshift.redhat.com/broker/rest/domain/milestogo/application/mywildflybldr/gear_groups
Nov 20, 2014 2:53:35 PM hudson.plugins.openshift.OpenShiftComputerLauncher launch
INFO: Checking SSH access to application mywildflybldr-milestogo.rhcloud.com
Nov 20, 2014 2:53:35 PM hudson.plugins.openshift.OpenShiftComputerLauncher launch
INFO: Connecting via SSH '546e46304382ec3f29000012' 'mywildflybldr-milestogo.rhcloud.com' '/var/lib/openshift/546e393e5973ca0492000070/app-root/data/.ssh/jenkins_id_rsa'
Nov 20, 2014 2:53:35 PM hudson.slaves.NodeProvisioner update
INFO: mywildfly-build provisioning successfully completed. We have now 2 computer(s)
Nov 20, 2014 2:53:35 PM hudson.plugins.openshift.OpenShiftComputerLauncher launch
INFO: Connected via SSH.
Nov 20, 2014 2:53:35 PM hudson.plugins.openshift.OpenShiftComputerLauncher launch
INFO: Exec mkdir -p $OPENSHIFT_DATA_DIR/jenkins && cd $OPENSHIFT_DATA_DIR/jenkins && rm -f slave.jar && wget -q --no-check-certificate https://jenkins-milestogo.rhcloud.com/jnlpJars/slave.jar
Nov 20, 2014 2:53:42 PM hudson.plugins.openshift.OpenShiftComputerLauncher launch
INFO: Slave connected.
Nov 20, 2014 2:58:24 PM hudson.model.Run execute
INFO: mywildfly-build #1 main build action completed: SUCCESS

This shows the application was successfully deployed at mywildfly-milestogo.rhcloud.com/index.jsp and looks as shown:

Now change "src/main/webapp/index.jsp" to show a different heading, and change "src/test/java/org/javaee7/sample/PersonTest.java" to make one of the tests fail. Running "git commit" and "git push" shows the following results on the command line:

mywildfly> git commit . -m "breaking the test"
[master ff2de09] breaking the test
 2 files changed, 2 insertions(+), 2 deletions(-)
mywildfly> git push
Counting objects: 23, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (8/8), done.
Writing objects: 100% (12/12), 771 bytes | 0 bytes/s, done.
Total 12 (delta 5), reused 0 (delta 0)
remote: Executing Jenkins build.
remote:
remote: You can track your build at https://jenkins-milestogo.rhcloud.com/job/mywildfly-build
remote:
remote: Waiting for build to schedule.......Done
remote: Waiting for job to complete.....................................................................................................................................................................Done
remote: FAILED
remote: !!!!!!!!
remote: Deployment Halted!
remote: If the build failed before the deploy step, your previous
remote: build is still running. Otherwise, your application may be
remote: partially deployed or inaccessible.
remote: Fix the build and try again.
remote: !!!!!!!!
remote: An error occurred executing 'gear postreceive' (exit code: 1)
remote: Error message: CLIENT_ERROR: Failed to execute: 'control post-receive' for /var/lib/openshift/546e36e5e0b8cd4e2a000007/jenkins-client
remote:
remote: For more details about the problem, try running the command again with the '--trace' option.
To ssh://546e36e5e0b8cd4e2a000007@mywildfly-milestogo.rhcloud.com/~/git/mywildfly.git/
   d618fad..ff2de09  master -> master

The key statement to note is that the deployment is halted when the tests fail. You can verify this by revisiting mywildfly-milestogo.rhcloud.com/index.jsp and checking that the updated "index.jsp" is not visible. In short: when the tests pass, the website is updated; when the tests fail, it is not. So you've built a simple deployment pipeline for Java EE 7 using WildFly, OpenShift, Arquillian, and Jenkins!

Reference: Deployment Pipeline for Java EE 7 with WildFly, Arquillian, Jenkins, and OpenShift from our JCG partner Arun Gupta at the Miles to go 2.0 … blog.
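The gate that halts the failing deployment above is conceptually simple: the post-receive hook deploys only when the Jenkins build (which runs the tests) exits successfully. A toy sketch of that logic follows; the function names and the SIMULATE_FAILURE switch are illustrative stand-ins, not part of OpenShift's actual cartridge scripts:

```shell
#!/bin/sh
# Toy version of the "deploy only if tests pass" gate that the
# Jenkins/OpenShift post-receive hook enforces in the logs above.

run_tests() {
    # Stand-in for `mvn test`; set SIMULATE_FAILURE=1 to mimic a failing test.
    return "${SIMULATE_FAILURE:-0}"
}

deploy() {
    echo "Deployment completed with status: success"
}

if run_tests; then
    deploy
else
    # Mirrors the FAILED push output: halt and propagate a non-zero exit code.
    echo "Deployment Halted!"
    exit 1
fi
```

With SIMULATE_FAILURE unset the script deploys; with SIMULATE_FAILURE=1 it prints the halt message and exits non-zero, mirroring the FAILED/Deployment Halted! output shown in the push above.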
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact