
Stop Chatting, Start Coding

The first principle of eXtremely Distributed Software Development (XDSD) states that "everyone gets paid for verified deliverables". This literally means that, in order to get paid, every programmer has to write the code, commit it to the repository, pass a code review and make sure the code is merged into the destination branch. Only then is his result appreciated and paid for.

For most of my clients this already sounds extreme. They are used to a traditional scheme of paying per hour or per month. They immediately realize the benefits of XDSD, though, because for them this approach means that project funds are not wasted on activities that don't produce results.

But that's not all. This principle also means that nobody is paid for anything except tasks explicitly assigned to him/her. Thus, when a programmer has a question about the current design, specification, configuration, etc., nobody will be interested in answering it. Why not? Because there is no payment attached to it. Answering questions in Skype or HipChat or by email is not appreciated in XDSD in any way. The project simply doesn't pay for this activity. That's why none of our programmers do it.

We don't use any (I mean it!) informal communication channels in XDSD projects. We don't do meetings or conference calls. We never discuss any technical issues on Skype or by phone. So, how do we resolve problems and share information? We use task tracking systems for that. When a developer has a question, he submits it as a new "ticket". The project manager then picks it up and assigns it to another developer, who is able to answer it. Then, the answer goes back through the tracking system or directly into the source code. The "question ticket" gets closed when its author is satisfied with the answer. When the ticket is closed, those who answered it get paid.

Using this model, we significantly improve project communications by making them clean and transparent. We also save a lot of project funds, since every hour spent by a team member is traceable to the line of code he produced.

You can see how this happens in action, for example, in this ticket (the project is open source; that's why all communications are open): jcabi/jcabi-github#731. One Java developer was having a problem with his Git repository. Apparently he did something wrong and couldn't solve the problem by himself. He asked for help by submitting a new bug to the project, and he was paid for the bug report. Then, another team member was assigned to help him, which he did through a number of suggestions and instructions. In the end, the problem was solved, and he was also paid for the solution. In total, the project spent 45 minutes on the problem.

Reference: Stop Chatting, Start Coding from our JCG partner Yegor Bugayenko at the About Programming blog....

Single Sign-On with the Delegated Access Control Pattern

Suppose a medium-scale enterprise has a limited number of RESTful APIs. Company employees are allowed to access these APIs via web applications while they're behind the company firewall. All user data is stored in a Microsoft Active Directory, and all the web applications are connected to a Security Assertion Markup Language (SAML) 2.0 identity provider to authenticate users. The web applications need to access back-end APIs on behalf of the logged-in user.

The catch here is this last statement: "The web applications need to access back-end APIs on behalf of the logged-in user." This suggests the need for an access-delegation protocol: OAuth. However, users don't present their credentials directly to the web application; they authenticate through a SAML 2.0 identity provider. In this case, you need to find a way to exchange the SAML token received in the SAML 2.0 Web SSO protocol for an OAuth access token, which is defined in the SAML grant type for the OAuth 2.0 specification.

Once the web application receives the SAML token, as shown in step 3 of the above figure, it has to exchange it for an access token by talking to the OAuth authorization server. The authorization server must trust the SAML 2.0 identity provider. Once the web application gets the access token, it can use it to access back-end APIs.

The SAML grant type for OAuth doesn't provide a refresh token. The lifetime of the access token issued by the OAuth authorization server must match the lifetime of the SAML token used in the authorization grant. After the user logs in to the web application with a valid SAML token, the web app creates a session for the user, and from there onward it doesn't worry about the lifetime of the SAML token. This can lead to some issues. Say the SAML token expires, but the user still has a valid browser session in the web application. Because the SAML token has expired, you can expect that the corresponding OAuth access token obtained at the time of user login has expired as well. Now, if the web application tries to access a back-end API, the request will be rejected because the access token is expired. In such a scenario, the web application has to redirect the user back to the SAML 2.0 identity provider, get a new SAML token, and exchange that token for a new access token. If the session at the SAML 2.0 identity provider is still live, this redirection can be made transparent to the end user.

This is one of the ten API security patterns covered in my book Advanced API Security. You can find more details in the book.

Reference: Single Sign-On with the Delegated Access Control Pattern from our JCG partner Prabath Siriwardena at the Facile Login blog....
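To make the token exchange concrete, here is a minimal Java sketch of the request defined by the SAML profile for OAuth 2.0 (the saml2-bearer grant type, RFC 7522). The token endpoint URL and the way the SAML assertion string is obtained are assumptions for the example; a real authorization server will typically also require client authentication:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SamlBearerTokenExchange {

    public static void main(String[] args) throws Exception {
        // Hypothetical token endpoint of the OAuth authorization server.
        String tokenEndpoint = "https://authz.example.com/oauth2/token";

        // The SAML assertion received in step 3 of the Web SSO flow
        // (shown here as a placeholder string).
        String samlAssertionXml = "<saml2:Assertion>...</saml2:Assertion>";

        // RFC 7522 requires the assertion to be base64url-encoded.
        String assertion = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(samlAssertionXml.getBytes(StandardCharsets.UTF_8));

        // Form body for the saml2-bearer grant type.
        String body = "grant_type="
                + URLEncoder.encode("urn:ietf:params:oauth:grant-type:saml2-bearer", StandardCharsets.UTF_8)
                + "&assertion=" + URLEncoder.encode(assertion, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(tokenEndpoint))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // On success the response body is a JSON document carrying the access token.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}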

Legacy Code To Testable Code #1: Renaming

This post is part of the "Legacy Code to Testable Code" series. In the series we'll talk about the refactoring steps we can take before writing tests for legacy code, and how they make our life easier.

Renaming is easy and usually safe. Most IDEs have the functionality, and most languages (I'm not talking about you, C++) lend themselves to safe renaming. Why start with renaming? It helps make the code more understandable. When we can understand the code better, the tests we write will be more effective. In other words: don't write tests for code you don't understand. Renaming is an easy win on the way there.

Naming things is maybe the hardest thing in programming. Lots of code grows complex because we slap "Manager" on as a class suffix. It's like giving ourselves permission to make room for code we don't know where to put. It doesn't end with one class, though. Once there's an AccountManager class, soon BankManager and CustomerManager will appear. The same goes for a getValidCustomer method that should really be a void method but instead returns a success code. When we're sloppy, we allow generalized, confusing names to flourish in the code. When names are vague, all kinds of implementation pours in. It doesn't matter if I wrote the code, or somebody who doesn't work here anymore did. Legacy code can always do with improvement.

One of our main goals in writing tests is to improve code. But improving code safely without effort is a bonus. Risk is key here. If the IDE can do the renaming safely, we are more likely to do it. If, on the other hand, we need to rely on manual changes, chances are we won't. Mostly, when we're doing pre-test renaming, we'll concentrate on method names, and maybe variables in the code. These are usually small enough for picking good names (and if they're not, they can be extracted). Renaming classes is usually harder, because they generally cover more ground (remember how that happened, kids?). Renaming is part of our familiarization with the code, before testing it. Even if I don't know what to test, making the code readable helps me not only understand what it does, but also how to test it.

Renaming variables

We usually name variables by scope, if at all. Making the distinction helps in making sense of the code. If there's already a convention in the code (like an m_ prefix for fields), make sure that the convention is followed in the code you're observing. If there isn't a convention, start one. Compare the name of the variable to its type and to the method that returns it. If it can be improved, rename it. For example:

Acct a = BankManager.getAccount();

We can rename a to account, and then we won't need to remember what a is in the next 500 lines of our method. If the method returning the value seems confusing, its type can help you rename it. Don't skimp on vowels! It starts as a clever way to save screen space, but after the vowels go, we think about other options, and soon we're left with: acct. Not only less readable, but also annoying. Make the code readable.

Apart from renaming, if you can tidy up the code, put the declarations into one area, at the beginning of the class or method. If you find declarations spread around, clean them up.
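Here is a minimal before/after sketch of that rename (the Acct type and BankManager class are taken from the example above):

// Before: we have to remember what "a" is for the next 500 lines
Acct a = BankManager.getAccount();

// After: the variable explains itself; renaming the abbreviated Acct type
// to Account would be a natural follow-up (don't skimp on vowels!)
Acct account = BankManager.getAccount();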
Renaming methods

Methods are harder to rename because, thanks to the aforementioned sloppiness, they tend to do more. We usually start with a simple name, and then find the method is the best place to do a couple more things. We soon have a big method, with a name reminding us of what the method was back then. This is confusing to the reader, and of course makes it harder to test. But we're not there yet. For now, keep the renaming simple: If the method returns something, prefix it with get. If it does something, make sure it starts with a verb like set or make or call. Check out the last lines of the method, or the exit points. Usually (not always), from the structure you'll see what the purpose of the function is. Try to make the name fit the purpose. It may not always be the case, though, so be careful. Don't skimp on vowels! And don't worry about the screen space. Make method names convey their purpose, and use the entire alphabet for it.

Renaming classes

These are the hard ones to rename, and I usually recommend not to (at least not until you cut them down to size). We only see the types at declaration or creation time, so renaming them won't bring us much benefit in terms of understanding. It might be beneficial to identify a base or derived class in its name. Mostly it won't be, and it lends itself to getting nasty later, when adding a third hierarchy layer, for example. I still like an I prefix for an interface, although it may get me killed in some communities. And always remember kids: Don't skimp on vowels! That applies to classes too.

Now that we're done with renaming, it's extraction time. Up next.

Reference: Legacy Code To Testable Code #1: Renaming from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

Show your dev skills and win with CoderPower

CoderPower and Salesforce1 have launched a series of coding challenges in the USA and in Europe. CoderPower is a platform where developers from all around the globe compete to earn points and win prizes. After a summer competition with Salesforce1 and a contest with IBM in September, Salesforce1 is launching a new series of challenges.

The aim is simple: be the best developer at every coding challenge to win prizes. Each challenge has a time limit, and the faster you solve the challenge, the more points you get. At the end, Salesforce1 rewards the winner of each challenge and the overall best developer with cool prizes.

A nice prize pool will reward the winners at the end of the challenge. In Europe, the overall winner will get a 3D printer, while people ranking 2nd and 3rd will receive a Samsung Gear 2. Also, the highest scorer of each challenge will get a Sphero. In the USA, the grand prize is an Xbox One with Kinect, while winners of individual challenges can win quadcopter drones, Fitbit bracelets and Jambox Bluetooth speakers.

If you live in Europe, you may access the #Salesforce1Challenge at the following address: http://salesforcedevs.coderpower.com. Sign up before October 21st. If you live in the USA, log in to salesforce.coderpower.com to try to win the challenges. The USA challenge will come to an end on November 10th.

And finally, add the "JavaCodeGeeks" tag when you complete your profile on CoderPower. There will be a dedicated leaderboard for JavaCodeGeeks readers, and the best developer will get a special prize: 1 Google Cardboard (see the JavaCodeGeeks leaderboard)....

5 Handy Tips From JBoss BPM Suite For Release 6.0.3

Last week Red Hat released the next version of JBoss BPM Suite, labeled 6.0.3, and it is available in their Customer Portal for those with a subscription. If you are curious as to what is new in this release, see the release notes and the rest of the documentation online in the Customer Portal. What we are looking for here are some easy ways to get started with this new release, and this article has just the things you are looking for. We will give you a foolproof installation to get you started, show you a hands-on HR employee rewards project you can experiment with, provide a completed HR employee rewards project for your evaluation needs, give you a list of more demo projects you might like to explore, and finally provide you with an advanced integration example that ties together JBoss BPM Suite with JBoss Fuse.

Tip #1

This is the first step in your journey: getting up and running without a lot of hassle or wasted time. It starts with the JBoss BPM Suite Install Demo, which gives you a clean installation, a user to get started with, and the product open in your browser, ready for you to start designing your rules, events and BPM projects.

Tip #2

Now you have an installation and you are looking at the product, but what should you do next? No worries, we have a very nice hands-on tutorial for you that shows you how to take your installation from zero to a full-blown online HR Employee Rewards process project. You will work with the domain modeler, the form modeler and the process designer to realize this application. Check out this online workshop.

Tip #3

If you are just looking to evaluate or play with the product, we also have a fully finished HR employee rewards example for you to enjoy. This is the fully completed HR application with a domain model, human task forms and a BPM process that you can also put together yourself (see Tip #2). This project also has links to more information and details around what it is made of, and it provides a video walk-through. Give it a go and install the HR Employee Rewards Demo project to evaluate JBoss BPM Suite today.

Tip #4

There are many more demo projects you can look at to dig deeper into various aspects of the product. Here is a list for your explorations:

- JBoss BPM Suite Mortgage demo (financial / web service)
- JBoss BPM Suite Document Integration demo (telco / ECM / CMIS)
- JBoss BPM Suite Governance demo (DTGov / S-RAMP)
- JBoss BPM Suite Generic Loan demo (financial / signals)
- JBoss BPM Suite Customer Evaluation demo (rules)
- JBoss BPM Suite & JBoss FSW Integration demo

These should keep you busy and let you dig into the various aspects of this product.

Tip #5

The final step is to examine more advanced use cases, like integration with JBoss Fuse. It just so happens we have an interesting project, the JBoss BPM Suite & JBoss Fuse Integration Demo, that is just as easy to install as the previous ones, and it demonstrates the integration of a Camel route with a BPM process. Also be sure to watch the video in the project documentation that walks you through all the details.

We hope these handy tips are all you need to get started with JBoss BPM Suite's new 6.0.3 release.

Reference: 5 Handy Tips From JBoss BPM Suite For Release 6.0.3 from our JCG partner Eric Schabell at Eric Schabell's blog....

WSO2 Identity Server 5.0.0 Authentication Framework

WSO2 Identity Server 5.0.0 takes identity management in a new direction. No longer will there be federation silos or spaghetti identity anti-patterns. The authentication framework we introduced in IS 5.0.0 powers all of this. The objective of this blog post is to introduce the high-level concepts associated with the authentication framework.

Inbound Authenticators

The responsibility of inbound authenticators is to identify and parse all the incoming authentication requests and then build the corresponding responses. A given inbound authenticator has two parts: a request processor and a response builder. For each protocol supported by WSO2 IS there should be an inbound authenticator. Out of the box it comes with inbound authenticators for SAML 2.0, OpenID, OpenID Connect, OAuth 2.0, and WS-Federation (passive). For example, the responsibility of the SAML 2.0 request processor is to accept a SAML request from a service provider, validate the SAML request, build a common object model understood by the authentication framework, and hand the request over to it. The responsibility of the SAML response builder is to accept a common object model from the authentication framework and build a SAML response out of it. Both the request processors and the response builders are protocol aware, while the authentication framework is not coupled to any protocol.

Local Authenticators

The responsibility of the local authenticators is to authenticate the user with locally available credentials. This can be either username/password or even IWA (Integrated Windows Authentication). Local authenticators are decoupled from the inbound authenticators. Once the initial request is handed over to the authentication framework from an inbound authenticator, the authentication framework talks to the service provider configuration component to find the set of local authenticators registered with the service provider corresponding to the current authentication request. Once the local authentication is successfully completed, the local authenticator will notify the framework. The framework will then decide that no more authentication is needed and hand control over to the corresponding response builder of the inbound authenticator. You can develop your own local authenticators and plug them into the Identity Server.

Outbound / Federated Authenticators

The responsibility of the federated authenticators is to authenticate the user with an external system. This can be with Facebook, Google, Yahoo, LinkedIn, Twitter, Salesforce or any other identity provider. Federated authenticators are decoupled from the inbound authenticators. Once the initial request is handed over to the authentication framework from an inbound authenticator, the authentication framework talks to the service provider configuration component to find the set of federated authenticators registered with the service provider corresponding to the current authentication request. A federated authenticator has no value unless it is associated with an identity provider. For example, IS 5.0.0 supports SAML 2.0, OpenID, OpenID Connect, OAuth 2.0 and WS-Federation (passive) out of the box. The SAML 2.0 federated authenticator itself has no value; it has to be associated with an identity provider. Google Apps can be an identity provider, associated with the SAML 2.0 federated authenticator. This federated authenticator knows how to generate a SAML request to Google Apps and process a SAML response from it.
There are two parts in a federated authenticator: a request builder and a response processor. Once the federated authentication is successfully completed, the federated authenticator will notify the authentication framework. The framework will then decide that no more authentication is needed and hand control over to the corresponding response builder of the inbound authenticator. Both the request builder and the response processor are protocol aware, while the authentication framework is not coupled to any protocol. You can develop your own federated authenticators and plug them into the Identity Server.

Request-path Authenticators

This is a special type of authenticator. A request-path authenticator is always a local authenticator. Once the initial request is handed over to the authentication framework from an inbound authenticator, the authentication framework talks to the service provider configuration component to find the set of request-path authenticators registered with the service provider corresponding to the current authentication request. Then the framework will check whether any request-path authenticator is applicable to the initial authentication request. In other words, a request-path authenticator will get executed only if the initial authentication request brings the applicable set of credentials with it. Request-path authenticators always require the user credentials to be present in the initial authentication request itself; they do not need any end-user interaction with the Identity Server. Once the request-path authentication is successfully completed, the request-path authenticator will notify the authentication framework. The framework will then decide that no more authentication is needed and hand control over to the corresponding response builder of the inbound authenticator.

Identity Provider Configuration

The responsibility of the identity provider configuration in IS 5.0.0 is to represent external identity providers. These external identity providers can be Facebook, Yahoo, Google, Salesforce, Microsoft Windows Live or whoever. If we want to authenticate users against these identity providers, then we must associate one or more federated authenticators with them. For example, if we want to authenticate users against Google Apps, then we need to associate the SAML 2.0 authenticator with the Google Apps identity provider. If we want to authenticate users against Yahoo, then we need to associate the OpenID authenticator with it. To make this process much easier, the Identity Server also comes with a set of more specific federated authenticators. For example, if you want to authenticate against Facebook, you do not need to configure the OAuth 2.0 authenticator; instead you can directly use the Facebook federated authenticator.

Each identity provider configuration can also maintain a claim mapping. This is to map its own set of claims to the Identity Server's claims. When the response from an external identity provider is received by the response processor component of the federated authenticator, before handing control over to the authentication framework, the response processor will create name/value pairs of the user claims received in the response from the identity provider. These claims are specific to the external identity provider. It is then the responsibility of the authentication framework to read the claim mapping configuration from the identity provider component and do the conversion. So, while inside the framework, all the user claim values will be in a common format.
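As a rough illustration of what this conversion amounts to, here is a small self-contained Java sketch. The claim URIs and the map-based representation are assumptions for the example, not the actual WSO2 APIs:

import java.util.HashMap;
import java.util.Map;

public class ClaimMappingExample {

    public static void main(String[] args) {
        // Hypothetical mapping configured for one external identity provider:
        // external claim name -> local Identity Server claim URI.
        Map<String, String> claimMapping = new HashMap<>();
        claimMapping.put("emailAddress", "http://wso2.org/claims/emailaddress");
        claimMapping.put("givenName", "http://wso2.org/claims/givenname");

        // Claims as extracted by the response processor from the external
        // identity provider's response.
        Map<String, String> externalClaims = new HashMap<>();
        externalClaims.put("emailAddress", "alice@example.com");
        externalClaims.put("givenName", "Alice");

        // The framework-side conversion: afterwards, all claim values are
        // keyed by the common (local) claim URIs.
        Map<String, String> localClaims = new HashMap<>();
        externalClaims.forEach((externalName, value) -> {
            String localUri = claimMapping.get(externalName);
            if (localUri != null) {
                localClaims.put(localUri, value);
            }
        });

        System.out.println(localClaims);
    }
}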
Service Provider Configuration

The responsibility of the service provider configuration is to represent external service providers. These external service providers can be a web application, a mobile application, a Liferay portal, Salesforce (Salesforce can be both a service provider and an identity provider), Google Apps (Google Apps can be both a service provider and an identity provider) and many more. In the service provider configuration you define how the service provider talks to the Identity Server; this is via inbound authenticators. When you register a service provider you need to associate one or more inbound authenticators with it.

The service provider configuration also defines how to authenticate users. This can be via a local authenticator, a request-path authenticator or a federated authenticator. Based on this configuration, when the Identity Server receives an authentication request (via an inbound authenticator), it knows how to authenticate the user according to the service provider that initiated it.

Each service provider configuration can also maintain a claim mapping. This is to map its own set of claims to the Identity Server's claims. When the authentication framework hands over a set of claims (which it gets from the local user store or from an external identity provider) to the response builder of the inbound authenticator, the framework will talk to the service provider configuration component, find the claim mapping and do the claim conversion. The response builder will then receive the claims in a manner understood by the corresponding service provider.

Multi-option Authentication

The service provider can define how to authenticate users at the Identity Server for authentication requests initiated by it. While doing that, each service provider can pick more than one authenticator, so the end user will get multiple login options. This can be a combination of local authenticators and federated authenticators.

Multi-level (multi-factor) Authentication

The service provider can define how to authenticate users at the Identity Server for authentication requests initiated by it. While doing that, each service provider can define multiple steps, and for each step it can pick more than one authenticator. The authentication framework will track all the authenticators in each step and will proceed to the next step only if the user authenticates successfully in the current step. It's an AND between steps, while it's an OR between the authenticators in a given step.

Reference: WSO2 Identity Server 5.0.0 Authentication Framework from our JCG partner Prabath Siriwardena at the Facile Login blog....

Akka Notes – Actor Messaging – Request and Response – 3

Last time, when we looked at Actor messaging, we saw how fire-n-forget messages are sent (meaning we just send a message to the Actor but don't expect a response from it). Technically, we fire messages to Actors for their side effects ALL THE TIME. It is by design. Other than not responding, the target Actor could ALSO do the following with that message:

1. Send a response back to the sender (in our case, the TeacherActor would respond with a quote back to the StudentActor), OR
2. Forward a response to some other Actor who might be the intended audience, which in turn might respond/forward/have a side effect. Routers and Supervisors are examples of those cases (we'll look at them very soon).

Request & Response

In this write-up, we'll be focusing only on point 1 – the request-response cycle. The picture conveys what we are trying to achieve this time. For the sake of brevity, I didn't represent the ActorSystem, Dispatcher or Mailboxes in the picture.

1. The DriverApp sends an InitSignal message to the StudentActor.
2. The StudentActor reacts to the InitSignal message and sends a QuoteRequest message to the TeacherActor.
3. The TeacherActor, like we saw in the first discussion, responds with a QuoteResponse.
4. The StudentActor just logs the QuoteResponse to the console/logger.

We'll also cook up a testcase to verify it. Let's look at these 4 points in detail now:

1. The DriverApp sends an InitSignal message to the StudentActor

By now, you would have guessed what the DriverApp does. Just 4 things:

Initialize the ActorSystem:

//Initialize the ActorSystem
val system = ActorSystem("UniversityMessageSystem")

Create the TeacherActor:

//create the teacher actor
val teacherRef = system.actorOf(Props[TeacherActor], "teacherActor")

Create the StudentActor:

//create the Student Actor - pass the teacher actorref as a constructor parameter to StudentActor
val studentRef = system.actorOf(Props(new StudentActor(teacherRef)), "studentActor")

You'll notice that I am passing the ActorRef of the TeacherActor to the constructor of the StudentActor so that the StudentActor can use the ActorRef for sending messages to the TeacherActor. There are other ways to achieve this (like passing in the Props), but this method will come in handy when we look at Supervisors and Routers in the following write-ups. We'll also be looking at child actors pretty soon, but that wouldn't semantically be the right approach here – a Student creating a Teacher doesn't sound nice. Does it?

Lastly, the DriverApp sends an InitSignal to the StudentActor, so that the StudentActor can start sending the QuoteRequest message to the TeacherActor:

//send a message to the Student Actor
studentRef ! InitSignal

That's pretty much the DriverClass. The Thread.sleep and the ActorSystem.shutdown are just there to wait for a couple of seconds for the message sending to finish before we finally shut down the ActorSystem.
DriverApp.scala

package me.rerun.akkanotes.messaging.requestresponse

import akka.actor.ActorSystem
import akka.actor.Props
import me.rerun.akkanotes.messaging.protocols.StudentProtocol._
import akka.actor.ActorRef

object DriverApp extends App {

  //Initialize the ActorSystem
  val system = ActorSystem("UniversityMessageSystem")

  //construct the teacher actor
  val teacherRef = system.actorOf(Props[TeacherActor], "teacherActor")

  //construct the Student Actor - pass the teacher actorref as a constructor parameter to StudentActor
  val studentRef = system.actorOf(Props(new StudentActor(teacherRef)), "studentActor")

  //send a message to the Student Actor
  studentRef ! InitSignal

  //Let's wait for a couple of seconds before we shut down the system
  Thread.sleep(2000)

  //Shut down the ActorSystem.
  system.shutdown()

}

2. The StudentActor reacts to the InitSignal message and sends a QuoteRequest message to the TeacherActor AND 4. The StudentActor receives the QuoteResponse from the TeacherActor and just logs it to the console/logger

Why did I combine points 2 and 4? Because it is so simple you'll hate me if I separate them. So, point 2 – the StudentActor receives the InitSignal message from the DriverApp and sends a QuoteRequest to the TeacherActor:

def receive = {
  case InitSignal => {
    teacherActorRef ! QuoteRequest
  }
  ...
  ...

That's it!!!

Point 4 – the StudentActor logs the message that it receives from the TeacherActor. Just as promised:

case QuoteResponse(quoteString) => {
  log.info("Received QuoteResponse from Teacher")
  log.info(s"Printing from Student Actor $quoteString")
}

I am sure you'd agree that it almost looks like pseudocode now. So, the entire StudentActor class looks like:

StudentActor.scala

package me.rerun.akkanotes.messaging.requestresponse

import akka.actor.Actor
import akka.actor.ActorLogging
import me.rerun.akkanotes.messaging.protocols.TeacherProtocol._
import me.rerun.akkanotes.messaging.protocols.StudentProtocol._
import akka.actor.Props
import akka.actor.ActorRef

class StudentActor(teacherActorRef: ActorRef) extends Actor with ActorLogging {

  def receive = {
    case InitSignal => {
      teacherActorRef ! QuoteRequest
    }

    case QuoteResponse(quoteString) => {
      log.info("Received QuoteResponse from Teacher")
      log.info(s"Printing from Student Actor $quoteString")
    }
  }
}

3. The TeacherActor responds with a QuoteResponse

This is the exact same code as we saw in the fire-n-forget write-up. The TeacherActor receives a QuoteRequest message and sends a QuoteResponse back.

TeacherActor.scala

package me.rerun.akkanotes.messaging.requestresponse

import scala.util.Random

import akka.actor.Actor
import akka.actor.ActorLogging
import akka.actor.actorRef2Scala
import me.rerun.akkanotes.messaging.protocols.TeacherProtocol._

class TeacherActor extends Actor with ActorLogging {

  val quotes = List(
    "Moderation is for cowards",
    "Anything worth doing is worth overdoing",
    "The trouble is you think you have time",
    "You never gonna know if you never even try")

  def receive = {

    case QuoteRequest => {

      import util.Random

      //Get a random Quote from the list and construct a response
      val quoteResponse = QuoteResponse(quotes(Random.nextInt(quotes.size)))

      //respond back to the Student who is the original sender of QuoteRequest
      sender ! quoteResponse

    }
  }
}

Testcases

Now, our testcase will simulate the DriverApp. Since the StudentActor just logs the message and we won't be able to assert on the QuoteResponse itself, we'll just assert the presence of the log message in the EventStream (just like we talked about last time).
So, our testcase looks like:

"A student" must {

  "log a QuoteResponse eventually when an InitSignal is sent to it" in {

    import me.rerun.akkanotes.messaging.protocols.StudentProtocol._

    val teacherRef = system.actorOf(Props[TeacherActor], "teacherActor")
    val studentRef = system.actorOf(Props(new StudentActor(teacherRef)), "studentActor")

    EventFilter.info(start = "Printing from Student Actor", occurrences = 1).intercept {
      studentRef ! InitSignal
    }
  }
}

Code

The entire project can be downloaded from github here. Next up, we'll see how to use schedulers in Akka and how to monitor your Akka app using Kamon.

Reference: Akka Notes – Actor Messaging – Request and Response – 3 from our JCG partner Arun Manivannan at the Rerun.me blog....

Project Lifecycle in Teamed.io

In addition to being a hands-on programmer, I'm also co-founder and CTO of Teamed.io, a custom software development company. I play the role of technical and management leader in all projects we work with. I wrote this article for those who are interested in hiring me and/or my team. It demonstrates what happens from day one until the end of the project, when you choose to work with us. You will see below that our methods of software development differ seriously from what many other teams are using. I personally pay a lot of attention to the quality of code and the quality of the internal processes that connect our team.

There are four phases in every project I work with in Teamed.io:

1. Thinking. Here we're trying to understand: What is the problem that the product is going to solve? We're also investigating the product's boundaries: who will work with the software (actors) and how they will work with it (user stories). Deliverables: specification. Duration: from 2 days up to 3 weeks. Participants: product owner, analyst(s), architect, project manager.

2. Building. Here the software architect creates a proof-of-concept (aka an MVP, prototype, or skeleton). It is a one-man job that is done almost without any interaction with anyone else. The architect builds the product according to the specification in a very limited time frame. The result will have multiple bugs and open ends, but it will implement the main user story. The architect also configures continuous integration and delivery pipelines. Deliverables: working software. Duration: 2-5 days. Participants: architect.

3. Fixing. At this phase we add all the meat to the skeleton. This phase takes most of the time and budget and involves many participants. In some projects we invite up to 50 people to work at the same time. Since we treat all inconsistencies as bugs, this phase is mostly about finding, reporting and fixing bugs, in order to stabilize the product and get it ready for market launch. We increment and release the software multiple times a day, preferably to its user champions. Deliverables: bug fixes via pull requests. Duration: from weeks to months. Participants: programmer(s), designer(s), tester(s), code reviewer(s), architect, project manager.

4. Using. At this final phase we launch the product to its end-users and collect their feedback (both positive and negative). Everything they report back to us is registered as a bug. Then, we categorize the bugs and fix them. This phase may take years, but it never involves active implementation of new features. Deliverables: bug fixes via pull requests. Duration: months. Participants: programmer(s), code reviewer(s), project manager.

The biggest (i.e., longest and most expensive) phase is, of course, Fixing. It usually takes the majority of the time (over 70%). However, the most important and risky phase is the first one, Thinking. A mistake made during Thinking will cost much more than a mistake made later.

Thinking

Thinking is the first and most important phase. First, we give a name to the project and create a GitHub repository. We try to keep all our projects (both open source and commercial) in GitHub, mostly because the platform is very popular, very powerful, and really cheap ($7/mo for a set of 5 private projects). We also keep all communication in the GitHub issue tracker.

Then, we create a simple half-page SRS document (Software Requirements Specification).
Usually this is done right inside the source code, but sometimes in the GitHub wiki. What's important is that the document should be under version control. We will modify it very intensively during the course of the project. The SRS should briefly identify the main "actors" of the system and define the product scope. Even though it is only half a page, the creation of this initial SRS document is the most important and the most expensive task in the entire project. We pay a lot of attention to this step. Usually this document is written by myself in direct communication with the project sponsor. We can't afford a mistake at this step.

Then, we invite a few system analysts to the project. These guys are responsible for turning our initial SRS into a more complete and detailed specification. They start by asking questions, submitting them one by one as GitHub issues. Every question is addressed to the product owner. Using his answers, the system analysts modify the SRS document. This article explains how Requs helps us in this process: Incremental Requirements With Requs.

At the end of the Thinking phase we estimate the size of the project in lines of code. Using lines of code, we can roughly estimate a budget. I stay personally involved in the project during the entire Thinking phase.

Building

This is a one-man job for an architect. Every project we work with has an architect who is personally responsible for the quality and all technical decisions made there. I try to play this role in most projects. The Building phase is rather straightforward. I have to implement the solution according to the SRS, in a few working days. No matter how big the idea and how massive the planned development, I still have to create (build from scratch!) the product in, say, three days. Besides building the software itself, I have to configure all basic DevOps processes, including: 1) automated testing and quality control, 2) deploying and releasing pipelines, 3) a repository of artifacts, 4) a continuous integration service, etc. The result of this phase is a working software package, deployable to its destination and available for testers. Technical quality requirements are also defined at this phase.

Fixing

Now it's time to build a distributed team of programmers. First, we invite those who have worked on other projects with us before and have already proven their quality. Very often we invite new people, finding them through StackOverflow, GitHub, oDesk, and other sources. The average team size of an average project is 10-20 programmers.

At this phase, we treat any inconsistency as a bug. If something is not clear in the documentation, or if something can be refactored for better readability, or if a function can be improved for higher performance, it is a bug to us. And bugs are welcome in our projects. We encourage everybody to report as many bugs as possible. This is how we achieve high quality. That is why the phase is called Fixing, after all. We report bugs and fix them. Hundreds of bugs. Sometimes thousands. The product grows in front of our very eyes, because after every bug fix we re-deploy the entire product to the production platform.

Every bug is reported, classified, discussed, and fixed in its own GitHub ticket and its own Git branch. We never allow anyone to just commit to the master branch; all changes must pass through our quality controls and be merged into master by rultor.com, our merging bot.
It is also important to mention that all communications with the product owner and between programmers happen only through GitHub issues. We never use any chats, Skype, emails or conferencing software. We communicate only through tickets and comments in GitHub.

Using

This is the final phase and it can take quite a long time. By now, the product is ready and has been launched to the market. But we still receive bug reports and feature requests from the product owner, and we still fix them through the same process flow as in the Fixing phase. We try to keep this phase as quiet as possible, in terms of the amount of bugs reported and fixed. Thanks to our intensive and pro-active bug finding and fixing in the previous phase, we usually have very few problems at the Using phase.

And big feature requests? At this phase, we usually try to convert them into new projects and develop them separately, starting again from Thinking.

Reference: Project Lifecycle in Teamed.io from our JCG partner Yegor Bugayenko at the About Programming blog....

Control Sphero using Temperature Sensor in Android

One of the most interesting topics around Android is how we can connect our smart phones to other devices, or to smart devices, to get information from them or to control them. In this post I want to introduce a new way to use our Android phone and explain how we can use it to control a Sphero ball.

Introduction

In this Android project, I want to describe how we can use the temperature sensor inside the smart phone to control the Sphero ball's color. In other words, I want to change the ball color according to the temperature measured by the smart phone, even if the smart phone is in standby mode or the Activity is not in the foreground. This is an interesting project because it can be used to describe some important concepts:

- Android Services
- Android Broadcast receivers
- Sensors
- The Alarm Manager

and finally, but not less important, how to connect to and use the Sphero ball with its SDK. We want to design an app like the one shown below:

Design the app

Now that we know what we want to obtain, we can mix Android features and components to get it. We need a component that monitors the temperature sensor and another one that connects to the Sphero. As said before, we want these components to work even if the app is not in the foreground or the smart phone is not active. So we need a Service, because this Android component can fulfill our requirements. We need, then:

- An Activity that is the app UI
- A Service that monitors the temperature sensor
- A Service that connects to the ball and controls its color

Looking at the pic below, we can notice that the UI Activity starts two services and listens for events coming from them. In more detail, the Activity sets up an alarm that is used to start the Temperature Sensor Service, so that we won't drain the battery. The alarm can be configured to fire every fixed amount of time. Every time the Temperature Sensor Service starts, it measures the environment temperature using the smart phone sensor and broadcasts the value. The UI Activity listens for these events and shows the value in the UI; at the same time, the Ball Connection Service listens for the same event, and as soon as it gets it, this service calculates the color components (R,G,B) and sets the ball color.

Create Temperature Sensor Service: code

Now that we have an overview of the main components in our app, we can start coding. The first element we want to code is the Temperature Sensor Service that reads the current temperature. As we know, we need a service:

public class SensorService extends Service implements SensorEventListener {
  ...
}

We must implement SensorEventListener to listen to the sensor events; then, in onStartCommand, we register this class as a listener:

@Override
public int onStartCommand(Intent intent, int flags, int startId) {
  sManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
  sensor = sManager.getDefaultSensor(Sensor.TYPE_AMBIENT_TEMPERATURE);
  sManager.registerListener(this, sensor, SensorManager.SENSOR_DELAY_NORMAL);
  return Service.START_STICKY;
}

Finally, when we get notified about the new temperature value, we handle it:

@Override
public void onSensorChanged(SensorEvent event) {
  // We get the temperature and notify it to the Activity
  float temp = event.values[0];
  Intent i = new Intent();
  i.setAction(TEMP_BALL_SENSOR);
  i.putExtra("value", temp);
  sendBroadcast(i);

  // stop listener
  if (sManager != null)
    sManager.unregisterListener(this);

  // stop service
  stopSelf();
}

At the end of this method we stop the service, because we don't want to read the values all the time, so as not to drain the battery.
Create Ball Connection Service: code

The other service we have to implement handles the Sphero connection via bluetooth. You can refer to the Sphero SDK for more information. We want to handle the connection in an Android service:

public class BallConnectionService extends Service {
  ...
}

Now, in onStartCommand we start connecting to the Sphero, and at the same time we start listening for incoming temperature events (the registerReceiver call below):

@Override
public int onStartCommand(Intent intent, int flags, int startId) {
  if (mySphero == null)
    doConnection();

  IntentFilter rec = new IntentFilter();
  rec.addAction(SensorService.TEMP_BALL_SENSOR);
  registerReceiver(receiver, rec);

  return Service.START_STICKY;
}

In doConnection we make the real connection:

private void doConnection() {

  sendStatus(CONNECTING);
  createNotification("Connecting...");

  RobotProvider.getDefaultProvider().addConnectionListener(new ConnectionListener() {
    @Override
    public void onConnected(Robot robot) {
      Log.d("Temp", "Connected");
      mySphero = (Sphero) robot;
      sendStatus(CONNECTED);
      createNotification("Connected");
    }

    @Override
    public void onConnectionFailed(Robot robot) {
      Log.d("Temp", "Connection failed");
      sendStatus(FAILED);
    }

    @Override
    public void onDisconnected(Robot robot) {
      Log.d("Temp", "Disconnected");
      mySphero = null;
      createNotification("Disconnected!");
    }
  });

  RobotProvider.getDefaultProvider().addDiscoveryListener(new DiscoveryListener() {
    @Override
    public void onBluetoothDisabled() {
      Log.d("Temp", "BT Disabled");
    }

    @Override
    public void discoveryComplete(List<Sphero> spheros) {
      Log.d("Temp", "Found [" + spheros.size() + "]");
    }

    @Override
    public void onFound(List<Sphero> spheros) {
      // Do connection
      Log.d("Temp", "Found ball");
      RobotProvider.getDefaultProvider().connect(spheros.get(0));
    }
  });

  boolean success = RobotProvider.getDefaultProvider().startDiscovery(this);
}

The code seems complex, but it is really simple if you look at it carefully. We start by broadcasting the event that we are trying to connect to the Sphero; then, using the Sphero API, we register a listener to know when the connection is established and broadcast a new event that the connection is active. At the end of this method we start discovering whether a Sphero is around and ready to connect.

The last part of the service listens for the temperature event and sets the ball color:

private BroadcastReceiver receiver = new BroadcastReceiver() {
  @Override
  public void onReceive(Context context, Intent intent) {
    float val = intent.getFloatExtra("value", 0);
    Log.d("Temp", "Received value [" + val + "]");
    if (mySphero != null) {
      // send color to sphero
      int red = (int) (255 * val / RANGE) * (val > 10 ? 1 : 0);
      int green = (int) ((255 * (RANGE - Math.abs(val)) / RANGE) * (val < 10 ? 0.2 : 1));
      int blue = (int) (255 * (10 - val) / 10) * (val < 10 ? 1 : 0);

      mySphero.setColor(red, green, blue);
    }
  }
};
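The sendStatus and createNotification helpers used above are not shown in the post; here is a minimal sketch of what they might look like inside BallConnectionService. The broadcast action name, notification icon and id are assumptions for the example:

import android.app.NotificationManager;
import android.content.Context;
import android.content.Intent;
import android.support.v4.app.NotificationCompat;

// Inside BallConnectionService:

// Hypothetical action name; the Activity would register its ballReceiver for it.
public static final String BALL_STATUS = "me.example.BALL_STATUS";

private void sendStatus(int status) {
  // Broadcast the connection status; the Activity reads the "status" extra.
  Intent i = new Intent(BALL_STATUS);
  i.putExtra("status", status);
  sendBroadcast(i);
}

private void createNotification(String message) {
  // Show a simple status notification while the service works in the background.
  NotificationCompat.Builder builder = new NotificationCompat.Builder(this)
      .setSmallIcon(android.R.drawable.stat_notify_sync)
      .setContentTitle("Sphero connection")
      .setContentText(message);
  NotificationManager nm =
      (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
  nm.notify(1, builder.build());
}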
Create the Activity

The last step is creating the Activity that controls the UI and starts and stops the services. We provide two action bar buttons: one to start the services and another one to stop them. If we touch the start button, we use the AlarmManager to schedule when to run our service:

PendingIntent pi = createAlarm();
AlarmManager scheduler = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
scheduler.setInexactRepeating(AlarmManager.RTC_WAKEUP, System.currentTimeMillis(), 60 * 1000, pi);
Intent i1 = new Intent(this, BallConnectionService.class);
startService(i1);

In this simple code, we create a PendingIntent and get a reference to the AlarmManager, and finally we schedule the alarm so that the service is started at a fixed interval. In the createAlarm() method we set up the intent:

private PendingIntent createAlarm() {
  AlarmManager scheduler = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
  Intent intent = new Intent(this, SensorService.class);
  PendingIntent scheduledIntent = PendingIntent.getService(getApplicationContext(), 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
  return scheduledIntent;
}

Finally, we have to create two receivers that listen for events coming from the temperature sensor and ball connection services:

private BroadcastReceiver sensorReceiver = new BroadcastReceiver() {
  @Override
  public void onReceive(Context context, Intent intent) {
    float val = intent.getFloatExtra("value", 0);
    tempView.setText(String.format("%.1f", val));
  }
};

Here we simply show the current temperature in the UI, while for the ball service we have:

private BroadcastReceiver ballReceiver = new BroadcastReceiver() {
  @Override
  public void onReceive(Context context, Intent intent) {
    int status = intent.getIntExtra("status", -1000);

    Log.d("Temp", "Value Status [" + status + "]");
    if (status == BallConnectionService.CONNECTING) {
      tempView.startAnimation(pulseAnim);
      Toast.makeText(MyActivity.this, "Connecting...", Toast.LENGTH_SHORT).show();
    }
    else if (status == BallConnectionService.CONNECTED) {
      tempView.clearAnimation();
      Intent i = new Intent(MyActivity.this, SensorService.class);
      startService(i);
      Toast.makeText(MyActivity.this, "Connected", Toast.LENGTH_LONG).show();
    }
    else if (status == BallConnectionService.FAILED) {
      Toast.makeText(MyActivity.this, "Connection failed. Try again by pressing the start button", Toast.LENGTH_LONG).show();
    }
  }
};

Source code available soon @ github.

Reference: Control Sphero using Temperature Sensor in Android from our JCG partner Francesco Azzola at the Surviving w/ Android blog....

Stateless Spring Security Part 1: Stateless CSRF protection

Today, with a RESTful architecture becoming more and more standard, it might be worthwhile to spend some time rethinking your current security approaches. Within this small series of blog posts we'll explore a few relatively new ways of solving web-related security issues in a stateless way. This first entry is about protecting your website against Cross-Site Request Forgery (CSRF).

Recap: What is Cross-Site Request Forgery?

CSRF attacks are based on lingering authentication cookies. After being logged in or otherwise identified as a unique visitor on a site, that site is likely to leave a cookie within the browser. Without explicitly logging out or otherwise removing this cookie, it is likely to remain valid for some time. Another site can abuse this by having the browser make (cross-site) requests to the site under attack. For example, including some javascript to make a POST to "http://siteunderattack.com/changepassword?pw=hacked" will have the browser make that request, attaching any (authentication) cookies still active for that domain! Even though the Single-Origin Policy (SOP) does not allow the malicious site access to any part of the response, as is probably clear from the example above, the harm is already done if the requested URL triggers any side-effects (state changes) in the background.

Common approach

The commonly used solution is to introduce the requirement of a so-called shared secret CSRF-token and make it known to the client as part of a previous response. The client is then required to ping it back to the server for any requests with side-effects. This can be done either directly within a form as a hidden field or as a custom HTTP header. Either way, other sites cannot successfully produce requests with the correct CSRF-token included, because SOP prevents responses from the server from being read cross-site. The issue with this approach is that the server needs to remember the value of each CSRF-token for each user inside a session.

Stateless approaches

1. Switch to a full and properly designed JSON based REST API. The Single-Origin Policy only allows cross-site HEAD/GET requests and POSTs. POSTs may only be one of the following mime-types: application/x-www-form-urlencoded, multipart/form-data, or text/plain. Indeed, no JSON! Now, considering GETs should never ever trigger side-effects in any properly designed HTTP based API, this leaves it up to you to simply disallow any non-JSON POST/PUT/DELETEs and all is well. For a scenario with uploading files (multipart/form-data) explicit CSRF protection is still needed.

2. Check the HTTP Referer header. The approach above can be further refined by checking for the presence and content of a Referer header for scenarios that are still susceptible, such as multipart/form-data POSTs. This header is used by browsers to designate which exact page (url) triggered a request, and it can easily be checked against the expected domain for the site. Note that if you opt for such a check, you should never allow requests without the header present. A sketch of this check follows below.

3. Client-side generated CSRF-tokens. Have the clients generate and send the same unique secret value in both a cookie and a custom HTTP header. Considering a website is only allowed to read/write a cookie for its own domain, only the real site can send the same value in both headers. Using this approach, all your server has to do is check if both values are equal, on a stateless per-request basis!
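For the second approach, here is a minimal sketch of such a Referer check as a servlet filter, in the same Spring style as the implementation below. The allowed prefix is an assumption for the example; a real filter would match it against your own site's URL:

import java.io.IOException;
import java.util.regex.Pattern;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.filter.OncePerRequestFilter;

public class RefererCheckFilter extends OncePerRequestFilter {

  // Hypothetical base URL of our own site; adjust to your domain.
  private static final String ALLOWED_PREFIX = "https://www.example.com/";

  private static final Pattern SAFE_METHODS = Pattern.compile("^(GET|HEAD|TRACE|OPTIONS)$");

  @Override
  protected void doFilterInternal(HttpServletRequest request,
      HttpServletResponse response, FilterChain filterChain)
      throws ServletException, IOException {
    // Only requests with potential side-effects need the check
    if (!SAFE_METHODS.matcher(request.getMethod()).matches()) {
      final String referer = request.getHeader("Referer");
      // Never allow requests without the header present
      if (referer == null || !referer.startsWith(ALLOWED_PREFIX)) {
        response.sendError(HttpServletResponse.SC_FORBIDDEN,
            "Missing or foreign Referer header");
        return;
      }
    }
    filterChain.doFilter(request, response);
  }
}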
Implementation

Focusing on the third approach for explicit but stateless CSRF-token based security, let's see what this looks like in code, using Spring Boot and Spring Security. Within Spring Boot you get some nice default security settings which you can fine-tune using your own configuration adapter. In this case all that is needed is to disable the default csrf behavior and add our own StatelessCSRFFilter:

Customize csrf protection

@EnableWebSecurity
@Order(1)
public class StatelessCSRFSecurityConfig extends WebSecurityConfigurerAdapter {

  @Override
  protected void configure(HttpSecurity http) throws Exception {
    http.csrf().disable().addFilterBefore(
      new StatelessCSRFFilter(), CsrfFilter.class);
  }
}

And here is the implementation of the StatelessCSRFFilter:

Custom CSRF filter

public class StatelessCSRFFilter extends OncePerRequestFilter {

  private static final String CSRF_TOKEN = "CSRF-TOKEN";
  private static final String X_CSRF_TOKEN = "X-CSRF-TOKEN";
  private final RequestMatcher requireCsrfProtectionMatcher = new DefaultRequiresCsrfMatcher();
  private final AccessDeniedHandler accessDeniedHandler = new AccessDeniedHandlerImpl();

  @Override
  protected void doFilterInternal(HttpServletRequest request,
      HttpServletResponse response, FilterChain filterChain)
      throws ServletException, IOException {
    if (requireCsrfProtectionMatcher.matches(request)) {
      final String csrfTokenValue = request.getHeader(X_CSRF_TOKEN);
      final Cookie[] cookies = request.getCookies();

      String csrfCookieValue = null;
      if (cookies != null) {
        for (Cookie cookie : cookies) {
          if (cookie.getName().equals(CSRF_TOKEN)) {
            csrfCookieValue = cookie.getValue();
          }
        }
      }

      if (csrfTokenValue == null || !csrfTokenValue.equals(csrfCookieValue)) {
        accessDeniedHandler.handle(request, response, new AccessDeniedException(
            "Missing or non-matching CSRF-token"));
        return;
      }
    }
    filterChain.doFilter(request, response);
  }

  public static final class DefaultRequiresCsrfMatcher implements RequestMatcher {
    private final Pattern allowedMethods = Pattern.compile("^(GET|HEAD|TRACE|OPTIONS)$");

    @Override
    public boolean matches(HttpServletRequest request) {
      return !allowedMethods.matcher(request.getMethod()).matches();
    }
  }
}

As expected, the stateless version doesn't do much more than a simple equals() on both header values.

Client-side implementation

Client-side implementation is trivial as well, especially when using AngularJS. AngularJS already comes with built-in CSRF-token support. If you tell it which cookie to read from, it will automatically put and send its value into a custom header of your choosing. (The browser takes care of sending the cookie header itself.) You can override AngularJS's default names (XSRF instead of CSRF) for these as follows:

Set proper token names

$http.defaults.xsrfHeaderName = 'X-CSRF-TOKEN';
$http.defaults.xsrfCookieName = 'CSRF-TOKEN';

Furthermore, if you want to generate a new token value per request, you could add a custom interceptor to the $httpProvider as follows:

Interceptor to generate cookie

app.config(['$httpProvider', function($httpProvider) {
  //fancy random token, loosely after https://gist.github.com/jed/982883
  function b(a){return a?(a^Math.random()*16>>a/4).toString(16):([1e16]+1e16).replace(/[01]/g,b)};

  $httpProvider.interceptors.push(function() {
    return {
      'request': function(response) {
        // put a new random secret into our CSRF-TOKEN Cookie before each request
        document.cookie = 'CSRF-TOKEN=' + b();
        return response;
      }
    };
  });
}]);

You can find a complete working example to play with at github.
Make sure you have Gradle 2.0 installed and simply run it using "gradle build" followed by "gradle run". If you want to play with it in your IDE, like Eclipse, go with "gradle eclipse" and just import and run it from within your IDE (no server needed).

Disclaimer

Sometimes the classical CSRF-tokens are wrongfully deemed a solution against replay or brute-force attacks. The stateless approaches listed here do not cover this type of attack. Personally, I feel both types of attack should be covered at another level, such as using HTTPS and rate-limiting, which I consider a must for any data-entry on a public web site!

Reference: Stateless Spring Security Part 1: Stateless CSRF protection from our JCG partner Robbert van Waveren at the JDriven blog....