


Akka Notes – Actor Messaging – Request and Response – 3

In the previous post on Actor messaging, we saw how fire-and-forget messages are sent: we just send a message to the Actor but don't expect a response from it. Technically, we fire messages to Actors for their side effects all the time. That is by design. Other than not responding, the target Actor could also do the following with a message:

1. Send a response back to the sender (in our case, the TeacherActor would respond with a quote back to the StudentActor), or
2. Forward it to some other Actor who might be the intended audience, which in turn might respond, forward, or have a side effect. Routers and Supervisors are examples of those cases (we'll look at them very soon).

Request & Response

In this write-up, we'll be focusing only on point 1, the request-response cycle.

The picture conveys what we are trying to achieve this time. For the sake of brevity, I didn't represent the ActorSystem, Dispatcher, or Mailboxes in the picture.

1. The DriverApp sends an InitSignal message to the StudentActor.
2. The StudentActor reacts to the InitSignal message and sends a QuoteRequest message to the TeacherActor.
3. The TeacherActor, as we saw in the first discussion, responds with a QuoteResponse.
4. The StudentActor just logs the QuoteResponse to the console/logger.

We'll also cook up a testcase to verify it. Let's look at these four points in detail now.

1. The DriverApp sends an InitSignal message to the StudentActor

By now, you would have guessed what the DriverApp does.
Just four things:

1. Initialize the ActorSystem:

//Initialize the ActorSystem
val system = ActorSystem("UniversityMessageSystem")

2. Create the TeacherActor:

//create the teacher actor
val teacherRef = system.actorOf(Props[TeacherActor], "teacherActor")

3. Create the StudentActor:

//create the Student Actor - pass the teacher ActorRef as a constructor parameter to StudentActor
val studentRef = system.actorOf(Props(new StudentActor(teacherRef)), "studentActor")

You'll notice that I am passing the ActorRef of the TeacherActor to the constructor of the StudentActor, so that the StudentActor can use the ActorRef for sending messages to the TeacherActor. There are other ways to achieve this (like passing in the Props), but this method will come in handy when we look at Supervisors and Routers in the following write-ups. We'll also be looking at child actors pretty soon, but that wouldn't semantically be the right approach here: a Student creating a Teacher doesn't sound nice, does it?

4. Lastly, the DriverApp sends an InitSignal to the StudentActor, so that the StudentActor can start sending QuoteRequest messages to the TeacherActor:

//send a message to the Student Actor
studentRef ! InitSignal

That's pretty much the DriverClass. The Thread.sleep and the ActorSystem.shutdown just wait a couple of seconds for the message sending to finish before we finally shut down the ActorSystem.
DriverApp.scala

package me.rerun.akkanotes.messaging.requestresponse

import akka.actor.ActorSystem
import akka.actor.Props
import akka.actor.ActorRef
import me.rerun.akkanotes.messaging.protocols.StudentProtocol._

object DriverApp extends App {

  //Initialize the ActorSystem
  val system = ActorSystem("UniversityMessageSystem")

  //construct the teacher actor
  val teacherRef = system.actorOf(Props[TeacherActor], "teacherActor")

  //construct the Student Actor - pass the teacher ActorRef as a constructor parameter to StudentActor
  val studentRef = system.actorOf(Props(new StudentActor(teacherRef)), "studentActor")

  //send a message to the Student Actor
  studentRef ! InitSignal

  //Let's wait for a couple of seconds before we shut down the system
  Thread.sleep(2000)

  //Shut down the ActorSystem
  system.shutdown()
}

2. The StudentActor reacts to the InitSignal message and sends a QuoteRequest message to the TeacherActor AND 4. The StudentActor receives the QuoteResponse from the TeacherActor and just logs it to the console/logger

Why did I combine points 2 and 4? Because it is so simple you'll hate me if I separate them.

So, point 2: the StudentActor receives the InitSignal message from the DriverApp and sends a QuoteRequest to the TeacherActor.

def receive = {
  case InitSignal => {
    teacherActorRef ! QuoteRequest
  }
  ...
  ...

That's it!!!

Point 4: the StudentActor logs the message that it receives from the TeacherActor. Just as promised:

case QuoteResponse(quoteString) => {
  log.info("Received QuoteResponse from Teacher")
  log.info(s"Printing from Student Actor $quoteString")
}

I am sure you'd agree that it almost looks like pseudocode now.
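The protocol messages themselves (InitSignal, QuoteRequest, QuoteResponse) live in separate protocol files that the post imports but never lists. In the Scala source they are presumably simple case objects and a case class; for readers following along with Akka's Java API, equivalent plain immutable message classes might look like this sketch (my own illustration, not the article's actual code):

```java
// Hypothetical Java equivalents of the Scala protocol messages used above.
// In the real project these are Scala case objects/classes inside
// StudentProtocol and TeacherProtocol; this is only an illustrative sketch.
public class Protocols {

    // Sent by the DriverApp to kick off the StudentActor.
    public static final class InitSignal {
        public static final InitSignal INSTANCE = new InitSignal();
        private InitSignal() {}
    }

    // Sent by the StudentActor to ask the TeacherActor for a quote.
    public static final class QuoteRequest {
        public static final QuoteRequest INSTANCE = new QuoteRequest();
        private QuoteRequest() {}
    }

    // Sent back by the TeacherActor; carries the quote payload.
    public static final class QuoteResponse {
        private final String quoteString;

        public QuoteResponse(String quoteString) {
            this.quoteString = quoteString;
        }

        public String getQuoteString() {
            return quoteString;
        }
    }
}
```

Keeping messages immutable like this matters in any actor system, because the same message object may be read concurrently by sender and receiver.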
So, the entire StudentActor class looks like:

StudentActor.scala

package me.rerun.akkanotes.messaging.requestresponse

import akka.actor.Actor
import akka.actor.ActorLogging
import akka.actor.ActorRef
import me.rerun.akkanotes.messaging.protocols.TeacherProtocol._
import me.rerun.akkanotes.messaging.protocols.StudentProtocol._

class StudentActor(teacherActorRef: ActorRef) extends Actor with ActorLogging {

  def receive = {
    case InitSignal => {
      teacherActorRef ! QuoteRequest
    }

    case QuoteResponse(quoteString) => {
      log.info("Received QuoteResponse from Teacher")
      log.info(s"Printing from Student Actor $quoteString")
    }
  }
}

3. The TeacherActor responds with a QuoteResponse

This is exactly the same code as we saw in the fire-and-forget write-up. The TeacherActor receives a QuoteRequest message and sends a QuoteResponse back.

TeacherActor.scala

package me.rerun.akkanotes.messaging.requestresponse

import scala.util.Random

import akka.actor.Actor
import akka.actor.ActorLogging
import akka.actor.actorRef2Scala
import me.rerun.akkanotes.messaging.protocols.TeacherProtocol._

class TeacherActor extends Actor with ActorLogging {

  val quotes = List(
    "Moderation is for cowards",
    "Anything worth doing is worth overdoing",
    "The trouble is you think you have time",
    "You never gonna know if you never even try")

  def receive = {
    case QuoteRequest => {
      //Get a random quote from the list and construct a response
      val quoteResponse = QuoteResponse(quotes(Random.nextInt(quotes.size)))

      //respond back to the Student, who is the original sender of the QuoteRequest
      sender ! quoteResponse
    }
  }
}

Testcases

Now, our testcase will simulate the DriverApp. Since the StudentActor just logs the message and we won't be able to assert on the QuoteResponse itself, we'll just assert the presence of the log message in the EventStream (just like we talked about last time).
So, our testcase looks like:

"A student" must {

  "log a QuoteResponse eventually when an InitSignal is sent to it" in {

    import me.rerun.akkanotes.messaging.protocols.StudentProtocol._

    val teacherRef = system.actorOf(Props[TeacherActor], "teacherActor")
    val studentRef = system.actorOf(Props(new StudentActor(teacherRef)), "studentActor")

    EventFilter.info(start = "Printing from Student Actor", occurrences = 1).intercept {
      studentRef ! InitSignal
    }
  }
}

Code

The entire project can be downloaded from GitHub here. Next up, we'll see how to use schedulers in Akka and how to monitor your Akka app using Kamon.

Reference: Akka Notes – Actor Messaging – Request and Response – 3 from our JCG partner Arun Manivannan at the Rerun.me blog.

Project Lifecycle in Teamed.io

In addition to being a hands-on programmer, I'm also co-founder and CTO of Teamed.io, a custom software development company. I play the role of technical and management leader in all projects we work with. I wrote this article for those who are interested in hiring me and/or my team. It will demonstrate what happens from day one until the end of the project, when you choose to work with us. You will see below that our methods of software development differ seriously from what many other teams are using. I personally pay a lot of attention to the quality of code and the quality of the internal processes that connect our team. There are four phases in every project I work on at Teamed.io:

Thinking. Here we're trying to understand: what is the problem that the product is going to solve? We're also investigating the product's boundaries, that is, who will work with the software (actors) and how they will work with it (user stories). Deliverables: specification. Duration: from 2 days up to 3 weeks. Participants: product owner, analyst(s), architect, project manager.

Building. Here the software architect creates a proof-of-concept (aka an MVP, prototype, or skeleton). It is a one-man job that is done almost without any interaction with anyone else. The architect builds the product according to the specification in a very limited time frame. The result will have multiple bugs and open ends, but it will implement the main user story. The architect also configures continuous integration and delivery pipelines. Deliverables: working software. Duration: 2-5 days. Participants: architect.

Fixing. At this phase we add all the meat to the skeleton. This phase takes most of the time and budget and involves many participants. In some projects we invite up to 50 people to work at the same time. Since we treat all inconsistencies as bugs, this phase is mostly about finding, reporting, and fixing bugs in order to stabilize the product and get it ready for market launch.
We increment and release the software multiple times a day, preferably to its user champions. Deliverables: bug fixes via pull requests. Duration: from weeks to months. Participants: programmer(s), designer(s), tester(s), code reviewer(s), architect, project manager.

Using. At this final phase we launch the product to its end users and collect their feedback (both positive and negative). Everything they report back to us is registered as a bug. Then we categorize the bugs and fix them. This phase may take years, but it never involves active implementation of new features. Deliverables: bug fixes via pull requests. Duration: months. Participants: programmer(s), code reviewer(s), project manager.

The biggest (i.e., longest and most expensive) phase is, of course, Fixing. It usually takes the majority of the time (over 70%). However, the most important and risky phase is the first one, Thinking. A mistake made during Thinking will cost much more than a mistake made later.

Thinking

Thinking is the first and the most important phase. First, we give a name to the project and create a GitHub repository. We try to keep all our projects (both open source and commercial) on GitHub, mostly because the platform is very popular, very powerful, and really cheap ($7/mo for a set of 5 private projects). We also keep all communication in the GitHub issue tracker.

Then, we create a simple half-page SRS document (Software Requirements Specification). Usually this is done right inside the source code, but sometimes in the GitHub wiki. What's important is that the document should be under version control. We will modify it during the course of the project, very intensively. The SRS should briefly identify the main "actors" of the system and define the product scope. Even though it is only half a page, the creation of this initial SRS document is the most important and the most expensive task in the entire project. We pay a lot of attention to this step.
Usually this document is written by me in direct communication with the project sponsor. We can't afford a mistake at this step. Then we invite a few system analysts to the project. These guys are responsible for turning our initial SRS into a more complete and detailed specification. They start by asking questions, submitting them one by one as GitHub issues. Every question is addressed to the product owner. Using his answers, the system analysts modify the SRS document. This article explains how Requs helps us in this process: Incremental Requirements With Requs.

At the end of the Thinking phase we estimate the size of the project in lines of code. Using lines of code, we can roughly estimate a budget. I stay personally involved in the project during the entire Thinking phase.

Building

This is a one-man job for an architect. Every project we work on has an architect who is personally responsible for the quality and for all technical decisions made there. I try to play this role in most projects. The Building phase is rather straightforward. I have to implement the solution according to the SRS in a few working days. No matter how big the idea and how massive the planned development, I still have to create (build from scratch!) the product in, say, three days. Besides building the software itself, I have to configure all basic DevOps processes, including: 1) automated testing and quality control, 2) deployment and release pipelines, 3) a repository of artifacts, 4) a continuous integration service, etc. The result of this phase is a working software package, deployable to its destination and available for testers. Technical quality requirements are also defined at this phase.

Fixing

Now it's time to build a distributed team of programmers. First, we invite those who have worked on other projects with us before and have already proven their quality. Very often we invite new people, finding them through StackOverflow, GitHub, oDesk, and other sources.
The average team size for an average project is 10-20 programmers. At this phase, we treat any inconsistency as a bug. If something is not clear in the documentation, or if something can be refactored for better readability, or if a function can be improved for higher performance, it is a bug to us. And bugs are welcome in our projects. We encourage everybody to report as many bugs as possible. This is how we achieve high quality. That is why the phase is called Fixing, after all. We report bugs and fix them. Hundreds of bugs. Sometimes thousands. The product grows in front of our very eyes, because after every bug fix we re-deploy the entire product to the production platform.

Every bug is reported, classified, discussed, and fixed in its own GitHub ticket and its own Git branch. We never allow anyone to just commit to the master branch; all changes must pass through our quality controls and be merged into master by rultor.com, our merging bot. It is also important to mention that all communication with the product owner and between programmers happens only through GitHub issues. We never use chats, Skype, emails, or conferencing software. We communicate only through tickets and comments on GitHub.

Using

This is the final phase, and it can take quite a long time. By now, the product is ready and launched to the market. But we still receive bug reports and feature requests from the product owner, and we still fix them through the same process flow as in the Fixing phase. We try to keep this phase as quiet as possible, in terms of the number of bugs reported and fixed. Thanks to our intensive and proactive bug finding and fixing in the previous phase, we usually have very few problems in the Using phase. And big feature requests?
At this phase, we usually try to convert them into new projects and develop them separately, starting again from Thinking.Reference: Project Lifecycle in Teamed.io from our JCG partner Yegor Bugayenko at the About Programming blog....

Control Sphero using Temperature Sensor in Android

One of the most interesting topics around Android is how we can connect our smartphones to other devices, or to smart devices, to get information from them or to control them. In this post I want to introduce a new way to use our Android phone and explain how we can use it to control a Sphero ball.

Introduction

In this Android project, I want to describe how we can use the temperature sensor inside the smartphone to control the Sphero ball's color. In other words, I want to change the ball's color according to the temperature measured by the smartphone, even if the smartphone is in standby mode or the Activity is not in the foreground. This is an interesting project because it can be used to describe some important concepts:

Android Services
Android Broadcast receivers
Sensors
The Alarm Manager

and finally, but not less importantly, how to connect to and use a Sphero ball with its SDK. What we want is to design an app like the one shown below.

Design the app

Now that we know what we want to obtain, we can mix Android features and components to get it. We need a component that monitors the temperature sensor and another one that connects to the Sphero. As said before, we want these components to work even if the app is not in the foreground or the smartphone is not active, so we need a Service, because this Android component can fulfill our requirements. We need, then:

An Activity that is the app UI
A Service that monitors the temperature sensor
A Service that connects to the ball and controls its color

Looking at the picture below, we can notice that the UI Activity starts the two services and listens for events coming from them. In more detail, the Activity sets up an alarm that is used to start the Temperature Sensor Service periodically, so that we won't drain the battery. The alarm can be configured to fire at a fixed interval. Every time the Temperature Sensor Service starts, it measures the environment temperature using the smartphone sensor and broadcasts the value.
The UI Activity listens for these events and shows the value in the UI. At the same time, the Ball Connection Service listens for the same event, and as soon as it gets it, this service calculates the color components (R,G,B) and sets the ball color.

Create the Temperature Sensor Service: code

Now that we have an overview of the main components in our app, we can start coding it. The first element we want to code is the Temperature Sensor Service, which reads the current temperature. As we know, we need a service:

public class SensorService extends Service implements SensorEventListener {
    ...
}

We must implement SensorEventListener to listen to the sensor events; then, in onStartCommand, we register this class as a listener:

@Override
public int onStartCommand(Intent intent, int flags, int startId) {
    sManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
    sensor = sManager.getDefaultSensor(Sensor.TYPE_AMBIENT_TEMPERATURE);
    sManager.registerListener(this, sensor, SensorManager.SENSOR_DELAY_NORMAL);
    return Service.START_STICKY;
}

Finally, when we get notified about a new temperature value, we handle it:

@Override
public void onSensorChanged(SensorEvent event) {
    // We get the temperature and notify it to the Activity
    float temp = event.values[0];
    Intent i = new Intent();
    i.setAction(TEMP_BALL_SENSOR);
    i.putExtra("value", temp);
    sendBroadcast(i);

    // stop listener
    if (sManager != null)
        sManager.unregisterListener(this);

    // stop service
    stopSelf();
}

After broadcasting the value, we stop the service, because we don't want to read values all the time and drain the battery.

Create the Ball Connection Service: code

The other service we have to implement handles the Sphero connection via Bluetooth. You can refer to the Sphero SDK for more information. We want to handle the connection in an Android service:

public class BallConnectionService extends Service {
    ..
}

Now, in onStartCommand, we start connecting to the Sphero and at the same time start listening for incoming temperature events:

@Override
public int onStartCommand(Intent intent, int flags, int startId) {
    if (mySphero == null)
        doConnection();

    IntentFilter rec = new IntentFilter();
    rec.addAction(SensorService.TEMP_BALL_SENSOR);
    registerReceiver(receiver, rec);

    return Service.START_STICKY;
}

In doConnection we make the real connection:

private void doConnection() {

    sendStatus(CONNECTING);
    createNotification("Connecting...");

    RobotProvider.getDefaultProvider().addConnectionListener(new ConnectionListener() {
        @Override
        public void onConnected(Robot robot) {
            Log.d("Temp", "Connected");
            mySphero = (Sphero) robot;
            sendStatus(CONNECTED);
            createNotification("Connected");
        }

        @Override
        public void onConnectionFailed(Robot robot) {
            Log.d("Temp", "Connection failed");
            sendStatus(FAILED);
        }

        @Override
        public void onDisconnected(Robot robot) {
            Log.d("Temp", "Disconnected");
            mySphero = null;
            createNotification("Disconnected!");
        }
    });

    RobotProvider.getDefaultProvider().addDiscoveryListener(new DiscoveryListener() {
        @Override
        public void onBluetoothDisabled() {
            Log.d("Temp", "BT Disabled");
        }

        @Override
        public void discoveryComplete(List<Sphero> spheros) {
            Log.d("Temp", "Found [" + spheros.size() + "]");
        }

        @Override
        public void onFound(List<Sphero> spheros) {
            // Do connection
            Log.d("Temp", "Found ball");
            RobotProvider.getDefaultProvider().connect(spheros.get(0));
        }
    });

    boolean success = RobotProvider.getDefaultProvider().startDiscovery(this);
}

The code seems complex, but it is really simple if you look at it carefully. We start by broadcasting the event that we are trying to connect to the Sphero; then, using the Sphero API, we register a listener to know when the connection is established and broadcast a new event that the connection is active. At the end of the method we start discovering whether a Sphero is around and ready to connect.
The last part of the service listens for the temperature event and sets the ball color:

private BroadcastReceiver receiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        float val = intent.getFloatExtra("value", 0);
        Log.d("Temp", "Received value [" + val + "]");
        if (mySphero != null) {
            // send color to sphero
            int red = (int) (255 * val / RANGE) * (val > 10 ? 1 : 0);
            int green = (int) ((255 * (RANGE - Math.abs(val)) / RANGE) * (val < 10 ? 0.2 : 1));
            int blue = (int) (255 * (10 - val) / 10) * (val < 10 ? 1 : 0);

            mySphero.setColor(red, green, blue);
        }
    }
};

Create the Activity

The last step is creating the Activity that controls the UI and starts and stops the services. We provide two action bar buttons: one to start the services and another one to stop them. If we touch the start button, we use the AlarmManager to schedule when to run our service:

PendingIntent pi = createAlarm();
AlarmManager scheduler = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
scheduler.setInexactRepeating(AlarmManager.RTC_WAKEUP, System.currentTimeMillis(), 60 * 1000, pi);

Intent i1 = new Intent(this, BallConnectionService.class);
startService(i1);

In this simple code, we create a PendingIntent and get a reference to the AlarmManager; finally, we schedule the alarm so that the service is started at a fixed interval.
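The temperature-to-color mapping inside the receiver above can be factored into a small pure helper, which makes it unit-testable without a ball or a sensor. Here is a sketch of that refactoring; note that RANGE is never defined in the article's listings, so the value 40 (degrees Celsius) used here is my own assumption:

```java
// Illustrative, testable extraction of the color math from the receiver.
// RANGE is not shown in the article's listings; 40 degrees Celsius is an
// assumption made for this sketch.
public class ColorMapper {
    static final float RANGE = 40f;

    // Returns {red, green, blue}, each intended to be in 0..255,
    // for a temperature in Celsius.
    public static int[] toRgb(float val) {
        int red = (int) (255 * val / RANGE) * (val > 10 ? 1 : 0);
        int green = (int) ((255 * (RANGE - Math.abs(val)) / RANGE) * (val < 10 ? 0.2 : 1));
        int blue = (int) (255 * (10 - val) / 10) * (val < 10 ? 1 : 0);
        return new int[] { red, green, blue };
    }
}
```

With this in place the receiver body reduces to a call like `int[] rgb = ColorMapper.toRgb(val); mySphero.setColor(rgb[0], rgb[1], rgb[2]);`, and the mapping itself can be asserted in a plain JUnit test.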
In the createAlarm() method we set up the intent:

private PendingIntent createAlarm() {
    AlarmManager scheduler = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
    Intent intent = new Intent(this, SensorService.class);
    PendingIntent scheduledIntent = PendingIntent.getService(getApplicationContext(), 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
    return scheduledIntent;
}

Finally, we have to create two receivers that listen for events coming from the temperature sensor and ball connection services:

private BroadcastReceiver sensorReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        float val = intent.getFloatExtra("value", 0);
        tempView.setText(String.format("%.1f", val));
    }
};

Here we show the current temperature in the UI, while for the ball service we have:

private BroadcastReceiver ballReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        int status = intent.getIntExtra("status", -1000);

        Log.d("Temp", "Value Status [" + status + "]");
        if (status == BallConnectionService.CONNECTING) {
            tempView.startAnimation(pulseAnim);
            Toast.makeText(MyActivity.this, "Connecting...", Toast.LENGTH_SHORT).show();
        } else if (status == BallConnectionService.CONNECTED) {
            tempView.clearAnimation();
            Intent i = new Intent(MyActivity.this, SensorService.class);
            startService(i);
            Toast.makeText(MyActivity.this, "Connected", Toast.LENGTH_LONG).show();
        } else if (status == BallConnectionService.FAILED) {
            Toast.makeText(MyActivity.this, "Connection failed. Try again by pressing the start button", Toast.LENGTH_LONG).show();
        }
    }
};

Source code available soon at GitHub.

Reference: Control Sphero using Temperature Sensor in Android from our JCG partner Francesco Azzola at the Surviving w/ Android blog.

Stateless Spring Security Part 1: Stateless CSRF protection

Today, with RESTful architectures becoming more and more standard, it might be worthwhile to spend some time rethinking your current security approaches. Within this small series of blog posts we'll explore a few relatively new ways of solving web-related security issues in a stateless way. This first entry is about protecting your website against Cross-Site Request Forgery (CSRF).

Recap: What is Cross-Site Request Forgery?

CSRF attacks are based on lingering authentication cookies. After being logged in or otherwise identified as a unique visitor on a site, that site is likely to leave a cookie within the browser. Without explicitly logging out or otherwise removing this cookie, it is likely to remain valid for some time. Another site can abuse this by having the browser make (cross-site) requests to the site under attack. For example, including some JavaScript that makes a POST to "http://siteunderattack.com/changepassword?pw=hacked" will have the browser make that request, attaching any (authentication) cookies still active for that domain, even though the Same-Origin Policy (SOP) does not allow the malicious site access to any part of the response. As is probably clear from the example above, the harm has already been done if the requested URL triggers any side effects (state changes) in the background.

Common approach

The commonly used solution is to introduce the requirement of a so-called shared secret CSRF token and make it known to the client as part of a previous response. The client is then required to ping it back to the server for any request with side effects. This can be done either directly within a form as a hidden field or as a custom HTTP header. Either way, other sites cannot successfully produce requests with the correct CSRF token included, because SOP prevents responses from the server from being read cross-site.
The issue with this approach is that the server needs to remember the value of each CSRF token for each user inside a session.

Stateless approaches

1. Switch to a full and properly designed JSON-based REST API. The Same-Origin Policy only allows cross-site HEAD/GET requests and POSTs, and cross-site POSTs may only use one of the following mime-types: application/x-www-form-urlencoded, multipart/form-data, or text/plain. Indeed, no JSON! Now, considering that GETs should never ever trigger side effects in any properly designed HTTP-based API, this leaves it up to you to simply disallow any non-JSON POST/PUT/DELETE and all is well. For a scenario with file uploads (multipart/form-data), explicit CSRF protection is still needed.

2. Check the HTTP Referer header. The approach above can be further refined by checking for the presence and content of a Referer header for scenarios that are still susceptible, such as multipart/form-data POSTs. This header is used by browsers to designate which exact page (URL) triggered a request, and it can easily be checked against the expected domain for the site. Note that if you opt for such a check, you should never allow requests without the header present.

3. Client-side generated CSRF tokens. Have the client generate and send the same unique secret value in both a cookie and a custom HTTP header. Considering that a website is only allowed to read/write a cookie for its own domain, only the real site can send the same value in both. Using this approach, all your server has to do is check whether both values are equal, on a stateless per-request basis!

Implementation

Focusing on the third approach for explicit but stateless CSRF-token-based security, let's see what this looks like in code, using Spring Boot and Spring Security. Within Spring Boot you get some nice default security settings, which you can fine-tune using your own configuration adapter.
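Approach 2 above (Referer checking) is not covered by the filter that follows, but its core is also a tiny stateless check. A hypothetical helper, with names and behavior of my own invention rather than code from this post, might look like:

```java
import java.net.URI;

// Hypothetical helper illustrating approach 2: reject state-changing
// requests whose Referer header is absent or points at the wrong host.
// This sketch is not part of the original post's code.
public class RefererCheck {

    // Returns true only when the Referer is present, parseable,
    // and its host matches the expected one.
    public static boolean isAllowed(String refererHeader, String expectedHost) {
        if (refererHeader == null || refererHeader.isEmpty()) {
            return false; // never allow requests without the header
        }
        try {
            String host = URI.create(refererHeader).getHost();
            return expectedHost.equalsIgnoreCase(host);
        } catch (IllegalArgumentException e) {
            return false; // malformed Referer
        }
    }
}
```

In a servlet filter this check would run only for the still-susceptible request types (such as multipart/form-data POSTs), denying the request when it returns false.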
In this case, all that is needed is to disable the default CSRF behavior and add our own StatelessCSRFFilter:

Customize CSRF protection

@EnableWebSecurity
@Order(1)
public class StatelessCSRFSecurityConfig extends WebSecurityConfigurerAdapter {

  @Override
  protected void configure(HttpSecurity http) throws Exception {
    http.csrf().disable().addFilterBefore(
      new StatelessCSRFFilter(), CsrfFilter.class);
  }
}

And here is the implementation of the StatelessCSRFFilter:

Custom CSRF filter

public class StatelessCSRFFilter extends OncePerRequestFilter {

  private static final String CSRF_TOKEN = "CSRF-TOKEN";
  private static final String X_CSRF_TOKEN = "X-CSRF-TOKEN";
  private final RequestMatcher requireCsrfProtectionMatcher = new DefaultRequiresCsrfMatcher();
  private final AccessDeniedHandler accessDeniedHandler = new AccessDeniedHandlerImpl();

  @Override
  protected void doFilterInternal(HttpServletRequest request,
      HttpServletResponse response, FilterChain filterChain)
      throws ServletException, IOException {
    if (requireCsrfProtectionMatcher.matches(request)) {
      final String csrfTokenValue = request.getHeader(X_CSRF_TOKEN);
      final Cookie[] cookies = request.getCookies();

      String csrfCookieValue = null;
      if (cookies != null) {
        for (Cookie cookie : cookies) {
          if (cookie.getName().equals(CSRF_TOKEN)) {
            csrfCookieValue = cookie.getValue();
          }
        }
      }

      if (csrfTokenValue == null || !csrfTokenValue.equals(csrfCookieValue)) {
        accessDeniedHandler.handle(request, response, new AccessDeniedException(
            "Missing or non-matching CSRF-token"));
        return;
      }
    }
    filterChain.doFilter(request, response);
  }

  public static final class DefaultRequiresCsrfMatcher implements RequestMatcher {
    private final Pattern allowedMethods = Pattern.compile("^(GET|HEAD|TRACE|OPTIONS)$");

    @Override
    public boolean matches(HttpServletRequest request) {
      return !allowedMethods.matcher(request.getMethod()).matches();
    }
  }
}

As expected, the stateless version doesn't do much more than a simple equals() on both values.
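That core comparison can be pulled out into a pure function to make the rule explicit and trivially testable; a sketch of my own, not code from the post:

```java
// Illustrative extraction of the filter's core check: a request passes only
// when the X-CSRF-TOKEN header is present and equals the CSRF-TOKEN cookie.
public class DoubleSubmitCheck {

    // headerValue: value of the X-CSRF-TOKEN header (may be null).
    // cookieValue: value of the CSRF-TOKEN cookie (may be null).
    public static boolean tokensMatch(String headerValue, String cookieValue) {
        return headerValue != null && headerValue.equals(cookieValue);
    }
}
```

Note that a missing header or a missing cookie both fail the check, which is exactly the behavior of the filter above.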
Client-side implementation

The client-side implementation is trivial as well, especially when using AngularJS. AngularJS already comes with built-in CSRF-token support: if you tell it which cookie to read from, it will automatically put and send its value in a custom header of your choosing (the browser takes care of sending the cookie header itself). You can override AngularJS's default names (XSRF instead of CSRF) for these as follows:

Set proper token names

$http.defaults.xsrfHeaderName = 'X-CSRF-TOKEN';
$http.defaults.xsrfCookieName = 'CSRF-TOKEN';

Furthermore, if you want to generate a new token value per request, you can add a custom interceptor to the $httpProvider as follows:

Interceptor to generate cookie

app.config(['$httpProvider', function($httpProvider) {
  //fancy random token, loosely after https://gist.github.com/jed/982883
  function b(a){return a?(a^Math.random()*16>>a/4).toString(16):([1e16]+1e16).replace(/[01]/g,b)};

  $httpProvider.interceptors.push(function() {
    return {
      'request': function(response) {
        // put a new random secret into our CSRF-TOKEN Cookie before each request
        document.cookie = 'CSRF-TOKEN=' + b();
        return response;
      }
    };
  });
}]);

You can find a complete working example to play with on GitHub. Make sure you have Gradle 2.0 installed and simply run it using "gradle build" followed by "gradle run". If you want to play with it in an IDE like Eclipse, go with "gradle eclipse" and just import and run it from within your IDE (no server needed).

Disclaimer

Sometimes classical CSRF tokens are wrongfully deemed a solution against replay or brute-force attacks. The stateless approaches listed here do not cover this type of attack. Personally, I feel both types of attack should be covered at another level, such as using HTTPS and rate limiting.
I consider both a must for any data entry on a public website!

Reference: Stateless Spring Security Part 1: Stateless CSRF protection from our JCG partner Robbert van Waveren at the JDriven blog.

The future is Micro Service Architectures on Apache Karaf

This is a guest blog post by Jamie Goodyear (blog, @icbts). He is an open source advocate, Apache developer, and computer systems analyst with Savoir Technologies; he has designed, critiqued, and supported architectures for large organizations worldwide. He holds a Bachelor of Science degree in Computer Science from Memorial University of Newfoundland. Jamie has worked in systems administration, software quality assurance, and senior software developer roles for businesses ranging from small start-ups to international corporations. He has attained committer status on Apache Karaf, ServiceMix, and Felix, is a Project Management Committee member on Apache Karaf, and is an Apache Software Foundation member. His first printed publication was co-authoring Instant OSGi Starter, Packt Publishing, with Johan Edstrom, followed by Learning Apache Karaf, Packt Publishing, with Johan Edstrom and Heath Kesler. His third and latest publication is Apache Karaf Cookbook, Packt Publishing, with Johan Edstrom, Heath Kesler, and Achim Nierbeck. I like Micro Service Architectures. There are many descriptions of what constitutes a micro service, and many specifications that could be described as following the pattern. In short, I tend to describe them as being the smallest unit of work that an application can do as a service for others. Bringing together these services, we're able to build larger architectures that are modular, lightweight, and resilient to change. From the point of view of modern systems architecture, the ability to provision small applications with full life cycle control is our ideal platform. Operators need only deploy the services they need, updating them in place, spinning up additional instances as required. Another way of describing this is as Applications as a Service (AaaS). Take particular small services, such as Apache Camel routes or Apache CXF endpoints, and bring them up and down without destroying the whole application. 
Apache Karaf IS the platform for micro services. To make micro services easier, Karaf provides many helpful features right out of the box: a collection of well-tested libraries and frameworks to help take the guesswork out of assembling a platform for your applications; provisioning of libraries or applications via a variety of mechanisms such as Apache Maven; feature descriptors to allow deployment of related services and resources together; console and web-based commands to make fine-grained control easy; and simplified integration testing via Pax Exam. One of my favourite micro service patterns is to use Apache Camel with a Managed Service Factory (MSF) on Apache Karaf. Camel provides a simple DSL for wiring together Enterprise Integration Patterns, moving data from endpoint A to endpoint B, for example. A Managed Service Factory is a modular pattern for configuration-driven deployments of your micro services – it ties together ConfigAdmin, the OSGi Service Registry, and our application code. For instance, a user could create a configuration to wire a Camel route using an MSF; a unique PID will be generated per configuration. This pattern is truly powerful. Create 100 configurations, and 100 corresponding micro services (Camel routes) will be instantiated. Only one set of code, however, requires maintenance. Let's take a close look at the implementation of the Managed Service Factory. The ManagedServiceFactory is responsible for managing instantiations (configurationPid), creating or updating the values of instantiated services, and finally, cleaning up after service instantiations. Read more on the ManagedServiceFactory API.

public class HelloFactory implements ManagedServiceFactory {

    @Override
    public String getName() {
        return configurationPid;
    }

    @Override
    public void updated(String pid, Dictionary dict) throws ConfigurationException {
        // Create a dispatching engine for the given configuration.
    }

    @Override
    public void deleted(String pid) {
        // Delete the corresponding dispatch engine for the given configuration.
    }

    // We wire in Blueprint
    public void init() {}
    public void destroy() {}
    public void setConfigurationPid(String configurationPid) {}
    public void setBundleContext(BundleContext bContext) {}
    public void setCamelContext(CamelContext camelContext) {}
}

We override the given ManagedServiceFactory interface to work with DispatchEngines. The DispatchEngine is a simple class that contains code for instantiating a Camel route using a given configuration.

public class HelloDispatcher {

    public void start() {
        // Create a RouteBuilder using the configuration and add it to the CamelContext.
        // Here 'greeting' and 'name' come from the configuration file.
        from("timer://helloTimer?fixedRate=true&period=1000").
            routeId("Hello " + name).
            log(greeting + " " + name);
    }

    public void stop() {
        // Remove the route from the CamelContext.
    }
}

When we deploy these classes as a bundle into Karaf, we obtain a particularly powerful Application as a Service. Each configuration we provision to the service instantiates a new Camel route (these configuration files quite simply consist of a greeting and a name). Camel's Karaf commands allow fine-grained control over these routes, providing the operator with simple management. Complete code for the above example is available via github, and is explored in detail in Packt Publishing's Apache Karaf Cookbook. Micro Service Architectures such as the above unleash the power of OSGi for common applications such as a Camel route or CXF endpoint. These are not the only applications that benefit, however. I'd like to share one of our Karaf success stories that highlights how Apache Karaf helped bring structure to an existing large-scale micro-service-based project. Imagine having hundreds of bundles distributed over dozens of interconnected projects, essentially being deployed into a plain OSGi core and left to luck to boot successfully. 
This is the situation that OpenDaylight, a platform for SDN and NFV, found itself in a few months ago. Using Karaf feature descriptors, each project was able to organize its dependencies, bundles, and other resources into coherent structures. Custom commands were developed to interact with their core services. Integration testing of each project into the whole was automated. Finally, all of these projects have been integrated into their own custom distribution. Their first Karaf-based release, Helium, is due out very soon. We're all looking forward to welcoming the SDN & NFV community to Karaf. While the Apache Karaf 3.0.x line is maintained as our primary production target, the community has been as busy as ever developing the next generation of Karaf containers. The 4.0.x line will ship with OSGi Rev5 support via Felix 4.4.1 and Equinox 3.9.1-v20140110-1610, and a completely refactored internal framework based on Declarative Services instead of Blueprint. From a user's point of view these changes will yield a smaller, more efficient Karaf core. There will be a Blueprint feature present in Karaf so that you can easily install Blueprint-based applications. You will always be capable of using Blueprint in Karaf. So the main difference from a user perspective is that you'd need to depend on the Blueprint service if you need it. This has been a very brief overview of Micro Service Architectures on Apache Karaf, and Karaf's future direction. I'd suggest that anyone interested in Micro Services visit the OSGi Alliance website and join the Apache Karaf community. For those who would like to dive into an advanced custom Karaf distribution, have a look at Aetos. Apache Karaf is also part of JBoss Fuse. Reference: The future is Micro Service Architectures on Apache Karaf from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

Identity Anti-patterns: Federation Silos and Spaghetti Identity

Recent research by the analyst firm Quocirca confirms that many businesses now have more external users than internal ones: in Europe 58 percent transact directly with users from other businesses and/or consumers; for the UK alone the figure is 65 percent. Looking at history, most enterprises today grow via acquisitions, mergers and partnerships. In the U.S. alone, mergers and acquisitions volume totaled $865.1 billion in the first nine months of 2013, according to Dealogic. That's a 39% increase over the same period a year ago – and the highest nine-month total since 2008. What does this mean for enterprise identity management? You have to work with multiple heterogeneous user stores, authentication protocols, legacy systems and many more. SAML, OpenID, OpenID Connect and WS-Federation all support identity federation – cross-domain authentication. But can we always expect all the parties in a federation use case to support SAML, OpenID or OpenID Connect? Most of the federation systems we see today are in silos. It can be a silo of SAML federation, a silo of OpenID Connect federation or a silo of OpenID federation. Even within a given federation silo, how do you scale with an increasing number of service providers and identity providers? Each service provider has to trust each identity provider, and this leads to the Spaghetti Identity anti-pattern. Federation Silos and Spaghetti Identity are two anti-patterns directly addressed by the Identity Bus pattern. With an Identity Bus, a given service provider is not coupled to a given identity provider – and also not coupled to a given federation protocol. A user should be able to log in to a service provider that accepts only SAML 2.0 tokens via an identity provider that only issues OpenID Connect tokens. The identity bus acts as the middleman that mediates and transforms identity tokens between heterogeneous identity protocols. 
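The mediation role just described can be sketched as simple table-driven rewriting: the bus renames claim URIs to what the service provider expects and maps identity-provider roles to service-provider roles. The class name, claim URIs and role names below are all invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of two mediation steps an identity bus performs:
// renaming claim URIs and mapping identity-provider roles to
// service-provider roles. All URIs and role names are made up.
public class IdentityBusMediator {

    private final Map<String, String> claimMap = new HashMap<>();
    private final Map<String, String> roleMap = new HashMap<>();

    public IdentityBusMediator() {
        claimMap.put("http://idp1.org/claims/email", "http://sp1.org/claims/email");
        roleMap.put("idp-admin", "sp-admin");
    }

    // Rewrite incoming claims to the URIs the service provider expects;
    // unknown claim URIs pass through unchanged.
    public Map<String, String> transformClaims(Map<String, String> idpClaims) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : idpClaims.entrySet()) {
            out.put(claimMap.getOrDefault(e.getKey(), e.getKey()), e.getValue());
        }
        return out;
    }

    // Map an identity-provider role to the corresponding service-provider role.
    public String mapRole(String idpRole) {
        return roleMap.getOrDefault(idpRole, idpRole);
    }
}
```

A real bus would of course do this inside protocol-specific token processing (SAML responses, OpenID Connect ID tokens), but the mapping tables are the heart of it.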
Let's see some of the benefits of the Identity Bus pattern. Introducing a new service provider is extremely easy: you only need to register the service provider at the identity bus and from there pick which identity providers it trusts. No need to add the service provider configuration to each and every identity provider. Removing an existing service provider is extremely easy: you only need to remove the service provider from the identity bus. No need to remove the service provider from each and every identity provider. Introducing a new identity provider is extremely easy: you only need to register the identity provider at the identity bus; it will be available to any service provider. Removing an existing identity provider is extremely easy: you only need to remove the identity provider from the identity bus. Enforcing new authentication protocols is extremely easy: say you need to authenticate users with both username/password and Duo Security (SMS-based authentication) – you only need to add that capability to the identity bus, and from there you pick the required set of authentication protocols for a given service provider at the time of service provider registration. Each service provider can pick how it wants to authenticate users at the identity bus. Claim transformations: your service provider may read the user's email address from the http://sp1.org/claims/email attribute id, but the identity provider of the user may send it as http://idp1.org/claims/email. The identity bus can transform the claims it receives from the identity provider into the format expected by the service provider. Role mapping: your service provider needs to authorize users once they are logged in. What a user can do at the identity provider is different from what the same user can do at the service provider. The user's roles from the identity provider define what he can do at the identity provider; the service provider's roles define the things a user can do at the service provider. 
The identity bus is capable of mapping the identity provider's roles to the service provider's roles. For example, a user may bring the idp-admin role from his identity provider – in a SAML response – and the identity bus will find the mapped service provider role corresponding to it, say sp-admin, and add that to the SAML response returned to the service provider from the identity bus. Just-in-time provisioning: since the identity bus is in the middle of all identity transactions, it can provision all external user identities into an internal user store. Centralized monitoring and auditing. Introducing a new federation protocol needs minimal changes: if you have a service provider or an identity provider that supports a proprietary federation protocol, you only need to add that capability to the identity bus. No need to implement it at each and every identity provider or service provider. WSO2 Identity Server is an open source Identity and Entitlement management server which supports SAML 2.0, OpenID, OAuth 2.0, OpenID Connect, XACML 3.0, SCIM, WS-Federation (passive) and many other identity federation patterns. The following diagram shows the high-level architecture of WSO2 Identity Server, which supports the Identity Bus pattern. Reference: Identity Anti-patterns: Federation Silos and Spaghetti Identity from our JCG partner Prabath Siriwardena at the Facile Login blog....

DI Containers are Code Polluters

While dependency injection (aka "DI") is a natural technique of composing objects in OOP (known long before the term was introduced by Martin Fowler), Spring IoC, Google Guice, Java EE 6 CDI, Dagger and other DI frameworks turn it into an anti-pattern. I'm not going to discuss the obvious arguments against "setter injection" (as in Spring IoC) and "field injection" (as in PicoContainer). These mechanisms simply violate basic principles of object-oriented programming and encourage us to create incomplete, mutable objects that get stuffed with data during the course of application execution. Remember: ideal objects must be immutable and may not contain setters. Instead, let's talk about "constructor injection" (as in Google Guice) and its use with dependency injection containers. I'll try to show why I consider these containers a redundancy, at least. What is Dependency Injection? This is what dependency injection is (not really different from plain old object composition):

public class Budget {
    private final DB db;
    public Budget(DB data) {
        this.db = data;
    }
    public long total() {
        return this.db.cell(
            "SELECT SUM(cost) FROM ledger"
        );
    }
}

The object data is called a "dependency". A Budget doesn't know what kind of database it is working with. All it needs from the database is its ability to fetch a cell, using an arbitrary SQL query, via the method cell(). We can instantiate a Budget with a PostgreSQL implementation of the DB interface, for example:

public class App {
    public static void main(String... args) {
        Budget budget = new Budget(
            new Postgres("jdbc:postgresql:5740/main")
        );
        System.out.println("Total is: " + budget.total());
    }
}

In other words, we're "injecting" a dependency into a new object budget. 
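One payoff of this constructor-based composition is testability: a unit test can hand Budget a fake implementation of DB instead of a live PostgreSQL connection. The DB interface below is reconstructed from how the article uses it, and FakeDb is invented for the example:

```java
// Because Budget receives its DB through the constructor, a test can
// inject a fake. The DB interface is reconstructed from the article's
// usage; FakeDb and its canned value are invented for this sketch.
interface DB {
    long cell(String sql);
}

class Budget {
    private final DB db;
    Budget(DB data) { this.db = data; }
    long total() { return this.db.cell("SELECT SUM(cost) FROM ledger"); }
}

class FakeDb implements DB {
    @Override
    public long cell(String sql) {
        return 42L; // canned answer; no database needed
    }
}
```

No container is required for this either: the test simply calls new Budget(new FakeDb()).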
An alternative to this "dependency injection" approach would be to let Budget decide what database it wants to work with:

public class Budget {
    private final DB db = new Postgres("jdbc:postgresql:5740/main");
    // class methods
}

This is very dirty and leads to 1) code duplication, 2) inability to reuse, and 3) inability to test, etc. No need to discuss why; it's obvious. Thus, dependency injection via a constructor is an amazing technique. Well, not even a technique, really. More like a feature of Java and all other object-oriented languages. It's expected that almost any object will want to encapsulate some knowledge (aka a "state"). That's what constructors are for. What is a DI Container? So far so good, but here comes the dark side – a dependency injection container. Here is how it works (let's use Google Guice as an example):

import javax.inject.Inject;

public class Budget {
    private final DB db;
    @Inject
    public Budget(DB data) {
        this.db = data;
    }
    // same methods as above
}

Pay attention: the constructor is annotated with @Inject. Then, we're supposed to configure a container somewhere, when the application starts:

Injector injector = Guice.createInjector(
    new AbstractModule() {
        @Override
        public void configure() {
            this.bind(DB.class).toInstance(
                new Postgres("jdbc:postgresql:5740/main")
            );
        }
    }
);

Some frameworks even allow us to configure the injector in an XML file. From now on, we are not allowed to instantiate Budget through the new operator, like we did before. Instead, we should use the injector we just created:

public class App {
    public static void main(String... args) {
        Injector injector = // as we just did in the previous snippet
        Budget budget = injector.getInstance(Budget.class);
        System.out.println("Total is: " + budget.total());
    }
}

The injector automatically finds out that in order to instantiate a Budget it has to provide an argument for its constructor. It will use an instance of class Postgres, which we instantiated in the injector. 
This is the right and recommended way to use Guice. There are a few even darker patterns, though, which are possible but not recommended. For example, you can make your injector a singleton and use it right inside the Budget class. These mechanisms are considered wrong even by DI container makers, however, so let's ignore them and focus on the recommended scenario. What Is This For? Let me reiterate and summarize the scenarios of incorrect usage of dependency injection containers: field injection, setter injection, passing the injector as a dependency, and making the injector a global singleton. If we put all of them aside, all we have left is the constructor injection explained above. And how does that help us? Why do we need it? Why can't we use a plain old new in the main class of the application? The container we created simply adds more lines to the code base, or even more files if we use XML. And it doesn't add anything except additional complexity. We now have to keep the container configuration in mind whenever we ask: "What database is used as an argument of a Budget?" The Right Way Now, let me show you a real life example of using new to construct an application. 
This is how we create a “thinking engine” in rultor.com (full class is in Agents.java): final Agent agent = new Agent.Iterative( new Array( new Understands( this.github, new QnSince( 49092213, new QnReferredTo( this.github.users().self().login(), new QnParametrized( new Question.FirstOf( new Array( new QnIfContains("config", new QnConfig(profile)), new QnIfContains("status", new QnStatus(talk)), new QnIfContains("version", new QnVersion()), new QnIfContains("hello", new QnHello()), new QnIfCollaborator( new QnAlone( talk, locks, new Question.FirstOf( new Array( new QnIfContains( "merge", new QnAskedBy( profile, Agents.commanders("merge"), new QnMerge() ) ), new QnIfContains( "deploy", new QnAskedBy( profile, Agents.commanders("deploy"), new QnDeploy() ) ), new QnIfContains( "release", new QnAskedBy( profile, Agents.commanders("release"), new QnRelease() ) ) ) ) ) ) ) ) ) ) ) ), new StartsRequest(profile), new RegistersShell( "b1.rultor.com", 22, "rultor", IOUtils.toString( this.getClass().getResourceAsStream("rultor.key"), CharEncoding.UTF_8 ) ), new StartsDaemon(profile), new KillsDaemon(TimeUnit.HOURS.toMinutes(2L)), new EndsDaemon(), new EndsRequest(), new Tweets( this.github, new OAuthTwitter( Manifests.read("Rultor-TwitterKey"), Manifests.read("Rultor-TwitterSecret"), Manifests.read("Rultor-TwitterToken"), Manifests.read("Rultor-TwitterTokenSecret") ) ), new CommentsTag(this.github), new Reports(this.github), new RemovesShell(), new ArchivesDaemon( new ReRegion( new Region.Simple( Manifests.read("Rultor-S3Key"), Manifests.read("Rultor-S3Secret") ) ).bucket(Manifests.read("Rultor-S3Bucket")) ), new Publishes(profile) ) ); Impressive? This is a true object composition. I believe this is how a proper object-oriented application should be instantiated. And DI containers? In my opinion, they just add unnecessary noise. Related Posts You may also find these posts interesting:Getters/Setters. Evil. Period. 
Anti-Patterns in OOP
Avoid String Concatenation
Objects Should Be Immutable
Why NULL is Bad?
Reference: DI Containers are Code Polluters from our JCG partner Yegor Bugayenko at the About Programming blog....

JPA Tutorial: Mapping Entities – Part 2

In my last post I showed a simple way of persisting an entity. I explained the default approach that JPA uses to determine the default table for an entity. Let's assume that we want to override this default name. We may want to do so because the data model was designed and fixed beforehand and the table names do not match our class names (I have seen people create tables with a "tbl_" prefix, for example). So how should we override the default table names to match the existing data model? Turns out, it's pretty simple. If we need to override the default table names assumed by JPA, there are a couple of ways to do it: We can use the name attribute of the @Entity annotation to provide an explicit entity name to match the database table name. For our example we could have used @Entity(name = "tbl_address") in our Address class if our table name was tbl_address. Or we can put a @Table annotation (defined in the javax.persistence package) just below the @Entity annotation and use its name attribute to specify the table name explicitly:

@Entity
@Table(name = "tbl_address")
public class Address {
    // Rest of the class
}

Of these two approaches the @Table annotation provides more options to customize the mapping. For example, some databases like PostgreSQL have a concept of schemas, using which you can further categorize/group your tables. Because of this feature you can create two tables with the same name in a single database (although they will belong to two different schemas). To access these tables you then add the schema name as the table prefix in your query. 
So if a PostgreSQL database has two different schemas named public (which is sort of the default schema for a PostgreSQL database) and document, and both of these schemas contain tables named document_collection, then both of these queries are perfectly valid:

-- fetch from the table under the public schema
SELECT * FROM public.document_collection;

-- fetch from the table under the document schema
SELECT * FROM document.document_collection;

In order to map an entity to the document_collection table in the document schema, you then use the @Table annotation with its schema attribute set to document:

@Entity
@Table(name = "document_collection", schema = "document")
public class DocumentCollection {
    // rest of the class
}

When specified this way, the schema name will be added as a prefix to the table name when JPA goes to the database to access the table, just like we did in our queries. What if, rather than specifying the schema name in the @Table annotation, you append the schema name to the table name itself, like this:

@Entity
@Table(name = "document.document_collection")
public class DocumentCollection {
    // rest of the class
}

Inlining the schema name with the table name this way is not guaranteed to work across all JPA implementations because support for it is not specified in the JPA specification (it is non-standard). So it's better not to make a habit of doing this even if your persistence provider supports it. Let's turn our attention to the columns next. In order to determine the default columns, JPA does something similar to the following: At first it checks to see if any explicit column mapping information is given. If no column mapping information is found, it tries to guess the default values for columns. To determine the default values, JPA needs to know the access type of the entity states, i.e., the way to read/write the states of the entity. In JPA two different access types are possible – field and property. 
For our example we have used field access (actually JPA assumed this from the location/placement of the @Id annotation, but more on this later). If you use this access type then the states will be written/read directly from the entity fields using the Reflection API. After the access type is known, JPA then tries to determine the column names. For the field access type JPA directly treats the field name as the column name, which means that if an entity has a field named status then it will be mapped to a column named status. At this point it should be clear to us how the states of the Address entities got saved into the corresponding columns. Each of the fields of the Address entity has an equivalent column in the database table tbl_address, so JPA directly saved them into their corresponding columns. The id field was saved into the id column, the city field into the city column and so on. OK then, let's move on to overriding column names. As far as I know there is only one way (if you happen to know of any other way, please comment in!) to override the default column names for entity states, which is by using the @Column annotation (defined in the javax.persistence package). So if the id column of the tbl_address table is renamed to address_id, then we could either change our field name to address_id, or we could use the @Column annotation with its name attribute set to address_id:

@Entity
@Table(name = "tbl_address")
public class Address {
    @Id
    @GeneratedValue
    @Column(name = "address_id")
    private Integer id;

    // Rest of the class
}

You can see that for all the above cases the default approaches that JPA uses are quite sensible, and in most cases you will be happy with them. However, changing the default values is also very easy and can be done very quickly. What if we have a field in the Address entity that we do not wish to save in the database? 
Suppose that the Address entity has a field named transientColumn which does not have any corresponding column in the database table:

@Entity
@Table(name = "tbl_address")
public class Address {
    @Id
    @GeneratedValue
    @Column(name = "address_id")
    private Integer id;

    private String street;
    private String city;
    private String province;
    private String country;
    private String postcode;
    private String transientColumn;

    // Rest of the class
}

If you run your code with the above change, you will get an exception that looks something like this:

Exception in thread “main” java.lang.ExceptionInInitializerError at com.keertimaan.javasamples.jpaexample.Main.main(Main.java:33) Caused by: javax.persistence.PersistenceException: Unable to build entity manager factory at org.hibernate.jpa.HibernatePersistenceProvider.createEntityManagerFactory(HibernatePersistenceProvider.java:83) at org.hibernate.ejb.HibernatePersistence.createEntityManagerFactory(HibernatePersistence.java:54) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:55) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:39) at com.keertimaan.javasamples.jpaexample.persistenceutil.PersistenceManager.<init>(PersistenceManager.java:31) at com.keertimaan.javasamples.jpaexample.persistenceutil.PersistenceManager.<clinit>(PersistenceManager.java:26) … 1 more Caused by: org.hibernate.HibernateException: Missing column: transientColumn in jpa_example.tbl_address at org.hibernate.mapping.Table.validateColumns(Table.java:365) at org.hibernate.cfg.Configuration.validateSchema(Configuration.java:1336) at org.hibernate.tool.hbm2ddl.SchemaValidator.validate(SchemaValidator.java:155) at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:525) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1857) at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl$4.perform(EntityManagerFactoryBuilderImpl.java:850) at 
org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl$4.perform(EntityManagerFactoryBuilderImpl.java:843) at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl.withTccl(ClassLoaderServiceImpl.java:398) at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:842) at org.hibernate.jpa.HibernatePersistenceProvider.createEntityManagerFactory(HibernatePersistenceProvider.java:75) … 6 more

The exception is saying that the persistence provider could not find any column in the database whose name is transientColumn, and we did not do anything to make it clear to the persistence provider that we do not wish to save this field in the database. The persistence provider took it as any other field in the entity that is mapped to a database column. In order to fix this problem, we can do either of the following: We can annotate the transientColumn field with the @Transient annotation (defined in the javax.persistence package) to let the persistence provider know that we do not wish to save this field and that it does not have any corresponding column in the table. Or we can use the transient keyword that Java has by default. The difference between these two approaches that comes to my mind is this: if we use the transient keyword instead of the annotation and an Address entity gets serialized from one JVM to another, then the transientColumn field will be reinitialized to its default value (just like any other transient field in Java). With the annotation this will not happen, and the transientColumn field will retain its value across the serialization. As a rule of thumb, I always use the annotation if I do not need to worry about serialization (and in most cases I don't). 
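The serialization difference is easy to demonstrate with plain Java serialization. TransientDemo below is an illustrative stub (the class shape and field values are made up), showing that a field marked with the transient keyword does not survive a round trip, whereas JPA's @Transient annotation would have no such effect:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Demonstrates the behaviour the article describes for Java's `transient`
// keyword: the field is dropped during serialization and comes back as
// null on the other side. (JPA's @Transient only hides the field from the
// persistence provider and does not affect Java serialization.)
public class TransientDemo {

    static class Address implements Serializable {
        String city = "Dhaka";                        // survives the round trip
        transient String transientColumn = "not persisted"; // does not

        Address() {}
    }

    // Serialize to bytes and deserialize again, simulating a JVM-to-JVM hop.
    static Address roundTrip(Address in) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(in);
            }
            try (ObjectInputStream is = new ObjectInputStream(
                     new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Address) is.readObject();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

After the round trip, city keeps its value while transientColumn is null, which is exactly the reinitialization behaviour described above.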
So using the annotation, we can fix the problem right away:

@Entity
@Table(name = "tbl_address")
public class Address {
    @Id
    @GeneratedValue
    @Column(name = "address_id")
    private Integer id;

    private String street;
    private String city;
    private String province;
    private String country;
    private String postcode;

    @Transient
    private String transientColumn;

    // Rest of the class
}

So that's it for today, folks. If you find any mistakes or have any input, please feel free to comment in! Until next time. Reference: JPA Tutorial: Mapping Entities – Part 2 from our JCG partner Sayem Ahmed at the Random Thoughts blog....

Logical vs physical clock optimistic locking

Introduction In my previous post I demonstrated why optimistic locking is the only viable solution for application-level transactions. Optimistic locking requires a version column that can be represented as: a physical clock (a timestamp value taken from the system clock), or a logical clock (an incrementing numeric value). This article will demonstrate why logical clocks are better suited for optimistic locking mechanisms. System time The system time is provided by the operating system's internal clocking algorithm. The programmable interval timer periodically sends an interrupt signal (with a frequency of 1.193182 MHz). The CPU receives the time interruption and increments a tick counter. Both Unix and Windows record time as the number of ticks since a predefined absolute time reference (an epoch). The operating system clock resolution varies from 1 ms (Android) to 100 ns (Windows) to 1 ns (Unix). Monotonic time To order events, the version must advance monotonically. While incrementing a local counter is a monotonic function, system time might not always return monotonic timestamps. Java has two ways of fetching the current system time: System#currentTimeMillis(), which gives you the number of milliseconds elapsed since the Unix epoch. This method doesn't give you monotonic results because it returns wall clock time, which is prone to both forward and backward adjustments (if NTP is used for system time synchronization). For a monotonic currentTimeMillis, you can check Peter Lawrey's solution or the Bitronix Transaction Manager Monotonic Clock. System#nanoTime(), which returns the number of nanoseconds elapsed since an arbitrarily chosen time reference. This method tries to use the current operating system's monotonic clock implementation, but it falls back to wall clock time if no monotonic clock can be found. Argument 1: System time is not always monotonically incremented. 
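A common workaround, in the spirit of the solutions mentioned above, is to anchor one wall-clock reading and then advance it using the monotonic System.nanoTime(), so that later adjustments of the system clock cannot make the returned values go backwards. The MonotonicClock class below is a simplified sketch, not either of the linked implementations:

```java
// Simplified sketch of a monotonic millisecond clock: capture the wall
// clock once at class load, then derive later readings from the monotonic
// System.nanoTime() delta. NTP adjustments to the wall clock no longer
// affect the values returned after startup.
public final class MonotonicClock {

    private static final long WALL_START = System.currentTimeMillis();
    private static final long NANO_START = System.nanoTime();

    private MonotonicClock() {}

    public static long currentTimeMillis() {
        long elapsedNanos = System.nanoTime() - NANO_START;
        return WALL_START + elapsedNanos / 1_000_000L;
    }
}
```

The trade-off is that the returned values can drift from the true wall clock over a long uptime; for version ordering that drift is irrelevant, since only monotonicity matters.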
Database timestamp precision

The SQL-92 standard defines the TIMESTAMP data type as YYYY-MM-DD hh:mm:ss. The fractional part is optional, and each database implements a specific timestamp data type:

| RDBMS | Timestamp resolution |
| --- | --- |
| Oracle | TIMESTAMP(9) may use up to 9 fractional digits (nanosecond precision). |
| MSSQL | DATETIME2 has a precision of 100 ns. |
| MySQL | MySQL 5.6.4 added microsecond precision for the TIME, DATETIME, and TIMESTAMP types (e.g. TIMESTAMP(6)). Earlier versions discard the fractional part of all temporal types. |
| PostgreSQL | Both TIME and TIMESTAMP have microsecond precision. |
| DB2 | TIMESTAMP(12) may use up to 12 fractional digits (picosecond precision). |

When it comes to persisting timestamps, most database servers offer at least 6 fractional digits. MySQL users had long been waiting for a more precise temporal type, and version 5.6.4 finally added microsecond precision. On a pre-5.6.4 MySQL server, updates can be lost within the lifespan of any given second, because all transactions updating the same database row see the same version timestamp (which points to the beginning of the currently running second).

Argument 2: Pre-5.6.4 MySQL versions only support second-precision timestamps.

Handling time is not that easy

Incrementing a local version number is always safer because this operation doesn't depend on any external factors. If the database row already contains a higher version number, your data has become stale. It's as simple as that. Time, on the other hand, is one of the most complicated dimensions to deal with. If you don't believe me, check the daylight saving time handling considerations. It took eight versions for Java to finally come up with a mature Date/Time API. Handling time across application layers (from JavaScript, to Java middleware, to database date/time types) makes matters worse.

Argument 3: Handling system time is a challenging job.
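Argument 2 can be made concrete with a short sketch (plain Java, no database involved) contrasting a second-precision timestamp version with a logical counter:

```java
public class TimestampVersionCollision {
    public static void main(String[] args) {
        // Two commits inside the same second receive identical version
        // values when the version column only stores second-precision
        // timestamps (as on pre-5.6.4 MySQL), so the later write can
        // silently overwrite the earlier one.
        long tsVersion1 = System.currentTimeMillis() / 1000; // truncate to seconds
        long tsVersion2 = System.currentTimeMillis() / 1000;

        // A logical counter always distinguishes the two events.
        long counter = 0;
        long logicalVersion1 = ++counter;
        long logicalVersion2 = ++counter;

        System.out.println("timestamp versions equal: " + (tsVersion1 == tsVersion2));
        System.out.println("logical versions equal:   " + (logicalVersion1 == logicalVersion2));
    }
}
```

The two timestamp reads will almost always land in the same second and therefore compare equal, while the logical versions are distinct by construction.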
You have to handle leap seconds, daylight saving time, time zones and various time standards.

Lessons from distributed computing

Optimistic locking is all about event ordering, so naturally we're only interested in the happened-before relationship. In distributed computing, logical clocks are favored over physical ones (system clocks), because network time synchronization implies variable latencies. Sequence-number versioning resembles the Lamport timestamps algorithm, with each event incrementing a single counter. While Lamport timestamps were defined for synchronizing events across multiple distributed nodes, database optimistic locking is much simpler, because there is only one node (the database server) where all transactions (coming from concurrent client connections) are synchronized.

Argument 4: Distributed computing favors logical clocks over physical ones, because we are only interested in event ordering anyway.

Conclusion

Using physical time might seem convenient at first, but it turns out to be a naive solution. In a distributed environment, perfect system time synchronization is highly unlikely. All in all, you should always prefer logical clocks when implementing an optimistic locking mechanism.

Reference: Logical vs physical clock optimistic locking from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog.
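The "compare the version, then increment it" rule behind logical-clock optimistic locking can be sketched without a database (plain Java; VersionedRow and its update method are hypothetical names for illustration, not a JPA or JDBC API):

```java
// Minimal sketch of logical-clock optimistic locking: an update succeeds
// only if the caller's version still matches the stored version, which is
// then incremented.
public class VersionedRow {
    private long version = 0;
    private String payload = "initial";

    // Returns false when the row was modified by a concurrent
    // transaction, i.e. the expected version no longer matches.
    public synchronized boolean update(long expectedVersion, String newPayload) {
        if (expectedVersion != version) {
            return false; // stale data: someone else committed first
        }
        payload = newPayload;
        version++; // the logical clock advances monotonically
        return true;
    }

    public synchronized long getVersion() { return version; }
    public synchronized String getPayload() { return payload; }

    public static void main(String[] args) {
        VersionedRow row = new VersionedRow();
        long read = row.getVersion();           // both "transactions" read version 0
        boolean first = row.update(read, "A");  // first writer wins
        boolean second = row.update(read, "B"); // second writer is rejected as stale
        System.out.println(first + " " + second + " " + row.getPayload());
        // prints "true false A"
    }
}
```

In a real database this is the familiar `UPDATE ... SET version = version + 1 WHERE id = ? AND version = ?` pattern: a zero update count tells the application its snapshot has gone stale.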

Six Tips for Interviewing Scrum Masters, Part 2

Now that you know what you expect from your Scrum Master's job (the deliverables), and you know the essential and desirable skills (the first three tips), you can focus on creating the interview questions and audition. (If you have not yet read Six Tips for Interviewing Scrum Masters, Part 1 for the first three tips, please do so now.)

Tip 4: Create Behavior-Description Questions for Your Scrum Master Based on Essential Qualities, Preferences, and Non-Technical Skills

For initiative, you might ask behavior-description questions like these:

- Give me an example of a recent time you thought the team was stuck. How did you know the team was stuck, and what did you do? (You want to know if the SM was command-and-control, interfering, or helpful.)
- Tell me about a time when your Product Owner was convinced the story was as small as possible, but the story was a month long. Have you encountered something like this? (Maybe a month is longer than what the candidate encountered. Maybe it wasn't the PO. Listen to their experience.) What happened? (Listen for what the SM did or did not do. Different Scrum Masters facilitate differently. There Is No Right Answer.)
- Tell me about a recent time the team had many stories in progress and didn't finish at the end of a sprint. What happened? (Listen for what the SM did or did not do. Different Scrum Masters facilitate differently. There Is No Right Answer.)

For flexibility, consider these questions:

- What do you consider negotiable in a Scrum team? Why? Give me a recent example of that flexibility.
- Give me an example of a time you and the team were stymied by something. What happened?
- Have you ever needed to compromise as a Scrum Master? Tell me about it.

Again, you have to listen for the context and what the Scrum Master did in the different contexts of the project and the organization. There is no right answer. There are answers that don't fit your context.
Make sure you keep reading to see my question about learning from past experiences. For perseverance, you might like these questions:

- Tell me about a time you advocated for something you wanted.
- Tell me about a work obstacle you had to overcome.
- Tell me about a time you had to maintain focus when it seemed everyone else was losing their focus.

Do you see the pattern? Apply behavior-description questions to your essential qualities, preferences, and non-technical skills.

Tip 5: Create at Least One Audition Based on Deliverables

Back in Six Tips for Interviewing Scrum Masters, Part 1, I said that you needed to define deliverables, and I suggested a potential list of 10 deliverables. You might have these candidate auditions:

- Facilitate a standup
- Facilitate a retrospective
- Look at a Scrum board and tell you what it means
- Look at a team's measurements and tell you what they mean

I'm not saying these are the best auditions, because I don't know what you need your Scrum Master to do. These are candidate auditions. I have a lot more about how to create auditions in Hiring Geeks That Fit.

Tip 6: Ignore Certifications and Look for the Growth Mindset

I have never been a fan of certifications. If I have to choose between a candidate with a certification and a candidate with a growth mindset, I'll select the candidate with the growth mindset. (Remember, you can buy a certification by taking a class. That's it.) Certifications are good for learning. They are not good for helping people prove they have executed anything successfully. When you interview for the growth mindset, you ask behavior-description questions. When they answer in a way that intrigues you, you ask, "What did you learn from that?" (a reflective hypothetical question). Then ask, "How have you put that learning into practice?" (a behavior-description question). Now you have yourself a terrific conversation, which is the basis for a great interview. Okay, there are my six tips for hiring a Scrum Master.
If you want to understand how to hire without fear, read Hiring Geeks That Fit.

Reference: Six Tips for Interviewing Scrum Masters, Part 2 from our JCG partner Johanna Rothman at the Managing Product Development blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.