SSH Tunneling Explained

Recently I wanted to set up a remote desktop sharing session from my home PC to my laptop. While going through the setup guide I came across SSH tunneling. Even though there are many articles on the subject, it still took me a considerable amount of googling, some experimenting and a couple of Wireshark sessions to grasp what's going on under the hood. Most of the guides were incomplete in terms of explaining the concept, which left me wanting a good article on the subject with some explanatory illustrations. So I decided to write one myself. So here goes…

Introduction

An SSH tunnel consists of an encrypted tunnel created through an SSH protocol connection. An SSH tunnel can be used to transfer unencrypted traffic over a network through an encrypted channel. For example, we can use an SSH tunnel to securely transfer files between an FTP server and a client even though the FTP protocol itself is not encrypted. SSH tunnels also provide a means to bypass firewalls that prohibit or filter certain internet services. For example, an organization may block certain sites using its proxy filter. But users may not wish to have their web traffic monitored or blocked by the organization's proxy filter. If users can connect to an external SSH server, they can create an SSH tunnel to forward a given port on their local machine to port 80 on a remote web server via the external SSH server. I will describe this scenario in detail in a little while.

To set up an SSH tunnel, a given port of one machine needs to be forwarded (which I am going to talk about in a little while) to a port on the other machine, which will be the other end of the tunnel. Once the SSH tunnel has been established, the user can connect to the earlier specified port on the first machine to access the network service.

Port Forwarding

SSH tunnels can be created in several ways using different kinds of port forwarding mechanisms. Ports can be forwarded in three ways:

- Local port forwarding
- Remote port forwarding
- Dynamic port forwarding

I didn't explain what port forwarding is. I found Wikipedia's definition more explanatory:

Port forwarding or port mapping is a name given to the combined technique of:
- translating the address and/or port number of a packet to a new destination,
- possibly accepting such packet(s) in a packet filter (firewall), and
- forwarding the packet according to the routing table.

Here the first technique will be used in creating an SSH tunnel. When a client application connects to the local port (local endpoint) of the SSH tunnel and transfers data, the data will be forwarded to the remote end by translating the host and port values to those of the remote end of the channel. So with that, let's see how SSH tunnels can be created using forwarded ports with an example.

Tunnelling with Local Port Forwarding

Let's say that yahoo.com is being blocked using a proxy filter at the university (for the sake of this example; I cannot think of any valid reason why Yahoo would be blocked). An SSH tunnel can be used to bypass this restriction. Let's name my machine at the university 'work' and my home machine 'home'. 'home' needs to have a public IP for this to work, and I am running an SSH server on my home machine. The following diagram illustrates the scenario.

To create the SSH tunnel, execute the following from the 'work' machine:

ssh -L 9001:yahoo.com:80 home

The 'L' switch indicates that a local port forward needs to be created. The switch syntax is as follows:
-L <local-port-to-listen>:<remote-host>:<remote-port>

Now the SSH client at 'work' will connect to the SSH server running at 'home' (usually listening on port 22), binding port 9001 of 'work' to listen for local requests, thus creating an SSH tunnel between 'home' and 'work'. At the 'home' end it will create a connection to yahoo.com at port 80. So 'work' doesn't need to know how to connect to yahoo.com; only 'home' needs to worry about that. The channel between 'work' and 'home' will be encrypted, while the connection between 'home' and yahoo.com will be unencrypted.

Now it is possible to browse yahoo.com by visiting http://localhost:9001 in the web browser on the 'work' computer. The 'home' computer acts as a gateway which accepts requests from the 'work' machine, fetches the data and tunnels it back. So the syntax of the full command is as follows:

ssh -L <local-port-to-listen>:<remote-host>:<remote-port> <gateway>

The image below describes the scenario. Note that the 'home' to yahoo.com connection is only made when the browser makes the request, not at tunnel setup time.

It is also possible to specify a port on the 'home' computer itself instead of connecting to an external host. This is useful if I were to set up a VNC session between 'work' and 'home'. Then the command line would be as follows:

ssh -L 5900:localhost:5900 home (executed from 'work')

So what does localhost refer to here? Is it 'work', since the command is executed from 'work'? It turns out that it is not. As explained earlier, <remote-host> is resolved relative to the gateway ('home' in this case), not the machine from which the tunnel is initiated. So this will make a connection to port 5900 of the 'home' computer, where the VNC server would be listening.

The created tunnel can be used to transfer all kinds of data, not limited to web browsing sessions. We can also tunnel SSH sessions through it. Let's assume there is another computer ('banned') to which we need to SSH from within the university, but SSH access to it is being blocked. It is possible to tunnel an SSH session to this host using a local port forward. The setup would look like this.

As can be seen, the transferred data between 'work' and 'banned' is now encrypted end to end. For this we need to create a local port forward as follows:

ssh -L 9001:banned:22 home

Now we need to create an SSH session to local port 9001, from where the session will get tunneled to 'banned' via the 'home' computer:

ssh -p 9001 localhost
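The same local forward can also be opened programmatically. Below is a minimal sketch using the JSch SSH library; the user name, password and host names are placeholders taken from the example above, so treat it as an illustration rather than a finished implementation:

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class LocalForward
{
   public static void main(String[] args) throws Exception
   {
      JSch jsch = new JSch();
      // Connect to the SSH server at 'home' (port 22), as 'ssh home' would.
      Session session = jsch.getSession("user", "home", 22);
      session.setPassword("secret"); // placeholder; key-based auth is preferable
      session.setConfig("StrictHostKeyChecking", "no"); // for the demo only
      session.connect();
      // Equivalent of: ssh -L 9001:yahoo.com:80 home
      int port = session.setPortForwardingL(9001, "yahoo.com", 80);
      System.out.println("Tunnel entrance listening on localhost:" + port);
   }
}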
With that, let's move on to the next type of SSH tunnelling method: reverse tunnelling.

Reverse Tunnelling with Remote Port Forwarding

Let's say it is required to connect to an internal university website from home. The university firewall is blocking all incoming traffic. How can we connect from 'home' to the internal network so that we can browse the internal site? A VPN setup is a good candidate here; however, for this example let's assume we don't have this facility. Enter SSH reverse tunnelling.

As in the earlier case, we will initiate the tunnel from the 'work' computer behind the firewall. This is possible since only incoming traffic is blocked while outgoing traffic is allowed. However, in contrast to the earlier case, the client will now be at the 'home' computer. Instead of the -L option we now use -R, which specifies that a reverse tunnel needs to be created:

ssh -R 9001:intra-site.com:80 home (executed from 'work')

Once executed, the SSH client at 'work' will connect to the SSH server running at 'home', creating an SSH channel. Then the server will bind port 9001 on the 'home' machine to listen for incoming requests, which will subsequently be routed through the created SSH channel between 'home' and 'work'. Now it's possible to browse the internal site by visiting http://localhost:9001 in the 'home' web browser. 'work' will then create a connection to the intra-site and relay the response back to 'home' via the created SSH channel.

As nice as all of this is, in both cases you still need to create a separate tunnel for each site you want to reach. Wouldn't it be nice if it were possible to proxy traffic to any site using the SSH channel created? That's what dynamic port forwarding is all about.

Dynamic Port Forwarding

Dynamic port forwarding allows you to configure one local port for tunnelling data to all remote destinations. To utilize this, however, the client application connecting to the local port should send its traffic using the SOCKS protocol. At the client side of the tunnel a SOCKS proxy is created, and the application (e.g. the browser) uses the SOCKS protocol to specify where the traffic should be sent when it leaves the other end of the SSH tunnel:

ssh -D 9001 home (executed from 'work')

Here SSH will create a SOCKS proxy listening for connections at local port 9001, and upon receiving a request it will route the traffic via the SSH channel created between 'work' and 'home'. For this, it is required to configure the browser to point to the SOCKS proxy at port 9001 on localhost.

Reference: SSH Tunneling Explained from our JCG partner Buddhika Chamith at the Source Open blog....

Is Copy and Paste Programming really a problem?

Copy and Paste Programming – taking a copy of existing code in your project and repurposing it – violates coding best practices like Don't Repeat Yourself (DRY). It's one of the most cited examples of technical debt, a lazy way of working, sloppy and short-sighted: an antipattern that adds to the long term cost of keeping a code base alive. But it's also a natural way to get stuff done – find something that already works, something that looks close to what you want to do, take a copy and use it as a starting point. Almost everybody has done it at some point. This is because there are times when copy and paste programming is not only convenient, but it might also be the right thing to do.

First of all, let's be clear what I mean by copy and paste. This is not copying code examples off of the Internet, a practice that comes with its own advantages and problems. By copy and paste I mean when programmers take a shortcut in reuse – when they need to solve a problem that is similar to another problem in the system, they'll start by taking a copy of existing code and changing what they need to.

Early in design and development, copy and paste programming has no real advantage. The code and design are still plastic; this is your chance to come up with the right set of abstractions, routines and libraries to do what the system needs to do. And there's not a lot to copy from anyways. It's late in development, when you already have a lot of code in place, and especially when you are maintaining large, long-lived systems, that the copy and paste argument gets much more complicated.

Why Copy and Paste?

Programmers copy and paste because it saves time. First, you have a starting point, code that you know works. All you have to do is figure out what needs to be changed or added. You can focus on the problem you are trying to solve, on what is different, and you only need to understand what you are going to actually use. You are more free to iterate and make changes to fit the problem in front of you – you can clean up code when you need to, delete code that you don't need. All of this is important, because you may not know what you will need to keep, what you need to change, and what you don't need at all until you are deeper into solving the problem.

Copy and paste programming also reduces risk. If you have to go back and change and extend existing code to do what it does today as well as to solve your new problem, you run the risk of breaking something that is already working. It is usually safer and less expensive (in the short term at least) to take a copy and work from there.

What if you are building a new B2B customer interface that will be used by a new set of customers? It probably makes sense to take an existing interface as a starting point, reuse the scaffolding and plumbing and wiring at least, and as much of the business code as makes sense, and then see what you need to change. In the end, there will be common code used by both interfaces (after all, that's why you are taking a copy), but it could take a while before you know what this code is. Finding a common design, the right abstractions and variations to support different implementations and to handle exceptions can be difficult and time consuming. You may end up with code that is harder to understand and harder to maintain and change in the future – because the original design didn't anticipate the different exceptions and extensions, and refactoring can only take you so far. You may need a new design and implementation.
Changing the existing code, refactoring or rewriting some of it to be general-purpose, shared and extendable, will add cost and risk to the work in front of you. You can't afford to create problems for existing customers and partners just because you want to bring some new customers online. You'll need to be extra careful, and you'll have to understand not only the details of what you are trying to do now (the new interface), but all of the details of the existing interface, its behavior and assumptions, so that you can preserve all of it. It's naïve to think that all of this behavior will be captured in your automated tests – assuming that you have a good set of automated tests. You'll need to go back and redo integration testing on the existing interface. Getting customers and partners who may have already spent weeks or months to test the software to retest it is difficult and expensive. They (justifiably) won't see the need to go through this time and expense because what they have is already working fine. Copying and pasting now, and making a plan to come back later to refactor or even redesign if necessary towards a common solution, is the right approach here.

When Copy and Paste makes sense

In Making Software's chapter on "Copy-Paste as a Principled Engineering Tool", Michael Godfrey and Cory Kapser explore the costs of copy and paste programming, and the cases where copy and paste make sense:

- Forking – purposely creating variants for hardware or platform variation, or for exploratory reasons.
- Templating – some languages don't support libraries and shared functions well, so it may be necessary to copy and paste to share code. Somewhere back in the beginning of time, the first COBOL programmer wrote a complete COBOL program – everybody else after that copied and pasted from each other.
- Customizing – creating temporary workarounds – as long as it is temporary.
- Microsoft's practice of "clone and own" to solve problems in big development organizations: one team takes code from another group and customizes it or adapts it to their own purposes – now they own their copy. This is a common approach with open source code that is used as a foundation and needs to be extended to solve a proprietary problem.

When Copy and Paste becomes a Problem

When to copy and paste, and how much of a problem it will become over time, depends on a few important factors.

First, the quality of what you are copying – how understandable the code is, how stable it is, how many bugs it has in it. You don't want to start off by inheriting somebody else's problems.

Second, how many copies have been made. A common rule of thumb from Fowler and Beck's Refactoring book is "three strikes and you refactor". This rule comes from recognizing that by making a copy of something that is already working and changing the copy, you've created a small maintenance problem. It may not be clear what this maintenance problem is yet or how best to clean it up, because two cases are not always enough to understand what is common and what is special. But the more copies you make, the more of a maintenance problem you create – the cost of making changes and fixes to multiple copies, and the risk of missing a change or fix in one of the copies, increases. By the time you make a third copy, you should be able to see patterns – what's common between the code, and what isn't. And if you have to do something in three similar but different ways, there is a good chance that there will be a fourth implementation, and a fifth.
By the third time, it's worthwhile to go back, restructure the code and come up with a more general-purpose solution.

Next, how often you have to change the copied code and keep it in sync – particularly, how often you have to change or fix the same code in more than one place.

Finally, how well you know the code – do you know that there are clones, and where to find them? How long it takes to find the copies, and how sure you are that you found them all. Tools can help with this. Source code analysis tools like clone detectors can help you find copy and paste code – outright copies and code that is not the same but similar (fuzzier matching with fuzzier results). Copied code is often fiddled with over time by different programmers, which makes it harder for tools to find all of the copies. Some programmers recommend leaving comments as markers in the code when you make a copy, highlighting where the copy was taken from, so that a maintenance programmer making a fix in the future will know to look for and check the other code.

Copy and Paste programming doesn't come for free. But like a lot of other ideas and practices in software development, copy and paste programming isn't right or wrong. It's a tool that can be used properly, or abused. Brian Foote, one of the people who first recognized the Big Ball of Mud problem in software design, says that copy and paste programming is the one form of reuse that programmers actually follow, because it works. It's important to recognize this. If we're going to Copy and Paste, let's do a responsible job of it.

Reference: Is Copy and Paste Programming really a problem? from our JCG partner Jim Bird at the Building Real Software blog....

My Problem With Your Interviews

This article comes right after Facebook rejected me after 3 phone interviews, but it is not going to be a hate-post. In fact, I've been planning to write it for a couple of months. But now onto the topic: tech companies (Google, Facebook, VMWare at least, but certainly many more) are all trying to find the best technical talent. (So they contacted me and asked if I'm interested in "exploring opportunities" with them.) But how do they do that?

The typical interview (be it a phone screen or an onsite interview) consists of solving a problem. Some call these problems "puzzles". They are usually non-real-world problems that aim to verify your algorithmic skills and your computer science knowledge. The simple ones include recursion, binary search, basic data structures (linked list, hashtable, trees). The more complex ones require red-black trees, Dijkstra, knowledge of NP-completeness, etc. If you are on the phone, you write the code in a shared document. If onsite – you write it on a whiteboard. So, these puzzles should verify your computer science and algorithm skills. But let's step back a little and see the picture from another angle.

- What you do in these interviews is something you never, ever do in real life: you write code without using any compiler or debugger. You do that in a limited time, with people watching you / waiting for you on the line. But let's put that aside for now. Let's assume that writing code without being able to run it is fine for interview purposes.
- The skills that these puzzles are testing are skills that the majority of developers have never needed. Most people are writing business software, and it does not require red-black trees. When was the last time you used recursion in your business software? So the last time you've done anything like that is in college. And many of these problems are really simple if you are a freshman – you did them as homework just the other day. But then it becomes a bit more tedious to write even things as simple as a binary search, because you just didn't do it yesterday. Of course you will be able to do it, but it takes a little more time, so that you can remember, and for sure by using a compiler. (By the way, the puzzles at Facebook were really simple. I didn't do them perfectly though, which is my bad, perhaps due to interview anxiety or because I just haven't done anything like that for the past 3 years.)
- The skills tested are rarely what you will do in your daily work anyway. Even in these cool companies like Google and Facebook, there are still pretty regular projects that require coding to APIs, supporting existing code, etc. I don't think you will be allowed to tweak the search engine in your first week, no matter how great you did on the interview.
- Interview preparation is suggested and actually required before these interviews, exactly as if it were a college exam. But that's dumb – you don't want people to study to match your artificial interview criteria. You want them to be… good programmers.
- Focusing on these computer science skills means these companies will probably miss good engineers that are simply not so interested in the low-level details.

Btw, here's an excerpt from my feedback after my first phone interview with Facebook: "On the other hand, I don't think having 1st year CS homework problems on interviews for senior developers is a great idea. One thing is – most people (including me) haven't done this since university, and it looks a bit like trivia questions rather than actual programming."
The problems outlined above are what I don't like about these types of interviews. And that's obviously because I don't like solving these sorts of puzzles. I just don't like them; they are not interesting to me. You could argue that in addition to your daily job, you can participate in programming competitions (like TopCoder) in order to keep your algorithm skills trained.

I'll give a short story from my high-school years. There were two student competitions – one was about exactly these types of programming puzzles: you are given a number of them for a fixed period of time, and you must submit a solution that covers as many of the pre-defined (but unknown to you) test cases as possible. The other competition was about creating a piece of software at home, and then presenting it in front of a jury. I was a top competitor in the latter, and sucked quite a lot in the former. Why? Because I hated solving useless, unrealistic problems for the sake of solving them. I liked building software instead. I would probably be good at solving puzzles if I liked them. I just don't. And these are not two levels of skill – one who can solve complex algorithmic puzzles (superior), and one who can't and therefore builds some piece of software (inferior). These are two different types of skills. And both of them are very useful in the process of creating good software. One writes the low-level stuff; the other one designs the APIs, the architecture, the deployment scheme, and manages abstraction in the code.

So, to get back to the question of what I do now in addition to my daily job – I build stuff. I've worked on a few personal projects that I enjoyed. Way more than I would've enjoyed a TopCoder competition. Unfortunately these cool companies are hiring primarily the TopCoder type of people. Which probably works for them, because they have a lot of candidates and they can afford a lot of "false negatives". But many smaller companies adopt these interview practices, and so they fail to get the best technical talent. The best article on software engineer interviewing I've read appeared just a few weeks ago: Jeff Atwood advised how to hire a programmer, and I completely support his approach.

My problem with interviews is that they don't actually verify whether you can do real programming work. And obviously my problem is that I don't like low-level and algorithmic stuff, so I wouldn't be able to work for cool companies like Google and Facebook.

Important note: I'm not saying you should not know what computational complexity is, how a hashtable works, or how to write recursion. You should, because that is basic stuff that you need in order to be able to write good code. But focusing too much on these things is what I consider irrelevant to day-to-day programming. (And for the trolls: I wouldn't have passed the 2 phone screens if I was a complete dumbass who can only write websites in PHP and thinks a hashtable is some sort of furniture.)

Reference: My Problem With Your Interviews from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog....

Software Engineering needs leaders, not ScrumMasters!

I recently reflected on SCRUM and the role of the ScrumMaster. We know that a ScrumMaster should act as a servant-leader; she should provide guidance but not decisions, removing impediments yet empowering the team: in a word, the ScrumMaster should act as a facilitator within the team, shielding the team from the outside world and ensuring that the team follows SCRUM best practices. We are also told that a SCRUM team should not elect a technical lead, since everyone in the team (with the exception of the ScrumMaster and Product Owner) should have the same responsibilities.

First, I think that no matter what SCRUM says, in any team at least one technical leader is destined to emerge; this happened in every single team I worked in. There is always someone who naturally takes the technical leadership and to whom team members look up for decisions; I think it is right that this person (or persons) should lead the team. This clearly contradicts what SCRUM tells us, and therefore I simply believe that in this respect SCRUM has got it all wrong.

Secondly I think that, with the exception of teams approaching Agile and SCRUM for the first time, which would benefit from a ScrumMaster / Agile coach, the figure of the ScrumMaster is unnecessary; I believe that experienced Agile teams know very well how to use Agile (SCRUM) tools and don't need a facilitator to tell them what to do. Sprints, TDD, pair programming, retrospectives, team responsibility, etc. are all normal capabilities of an experienced Agile team. What is a servant-leader anyway? Look but don't touch? Touch but don't taste? Taste but don't swallow?

No, I think that the good old concept of team lead / technical lead still holds true for most of the situations I have worked in, and that it represents a natural evolution of any team. We should delegate to the team the role of self-organising sprints, removing impediments (more of a lean approach if you wish), grooming the burnup or burndown, running retrospectives, etc. I think the concept of a ScrumMaster goes very much in the direction of a marketing campaign, where the typical consultant was suddenly able to re-sell herself as a ScrumMaster, as an Agile coach, as a facilitator.

Software engineering doesn't need grey figures who are there but whose presence can't be felt, who take responsibility for only part of what the team delivers (e.g. the observation of SCRUM practices, the removal of impediments, etc.). We need people who can act throughout the whole project lifecycle, who can take decisions front-to-back, who can take the responsibility of driving the team towards a clear direction even when the team thinks the direction should be another… We need leaders, not marketing campaigns and labels.

Reference: Software Engineering needs leaders, not ScrumMasters! from our JCG partner Marco Tedone at the Marco Tedone's blog....

High Availability for Web applications

As more mission critical applications move to the cloud, making the application highly available becomes super critical. An application that is not available for whatever reason – web server down, database down, etc. – means lost users and lost revenue that can be devastating to your business. In this blog we examine some basic high availability concepts.

Availability means your web application is available to your users to use. We would all like our applications to be available 100% of the time. But for various reasons that does not happen. The goal of high availability is to make the application available as much as possible. Generally, availability is expressed as the percentage of time that the application is available per year. One may say availability is 99% or 99.9% and so on.

Redundancy and failover are techniques used to achieve high availability. Redundancy is achieved by having multiple copies of your server. Instead of 1 Apache web server, you have two. One is the active server. The active server is monitored and if for some reason it fails, you fail over to the 2nd server, which becomes active. Another approach is to use a cluster of active servers, as is done in a Tomcat cluster. All servers are active. A load balancer distributes load among the members of the cluster. If one or two members of the cluster go down, no users are affected because the other servers continue processing. Of course, the load balancer can itself become a point of failure and needs redundancy and failover.

If you were launching a new web application to the cloud, you might start off with a basic architecture as shown below, without any HA consideration.

Phase 1: 1 Tomcat web server

Phase 2: Tomcat cluster

You add redundancy and scalability by using a Tomcat cluster as shown in the figure below. The cluster is fronted by Apache web server + mod_proxy, which distributes requests to the individual servers. Mod_proxy is the load balancer. Now the application scales horizontally. Tomcat or application failure is not an issue because there are other servers in the cluster. But we have introduced a new point of failure: the load balancer. If Apache+mod_proxy goes down, the application is unavailable. To read more about setting up a Tomcat cluster, see Tomcat clustering. To learn how to use a load balancer with Tomcat, see Loadbalancing with Tomcat.
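To make the load balancing setup more concrete, a mod_proxy_balancer configuration along these lines could front the cluster. This is a hedged sketch, not a drop-in config: the member hostnames, ports, routes and context path are placeholders, and directive details vary by Apache version:

# httpd.conf fragment (sketch; hostnames, ports and path are placeholders)
<Proxy balancer://tomcatcluster>
    # One BalancerMember per Tomcat instance in the cluster
    BalancerMember http://tomcat1.example.com:8080 route=node1
    BalancerMember http://tomcat2.example.com:8080 route=node2
</Proxy>

# Send application traffic to the cluster; keep a user's session on one node
ProxyPass /myapp balancer://tomcatcluster stickysession=JSESSIONID
ProxyPassReverse /myapp balancer://tomcatcluster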
Phase 3: Highly available Tomcat cluster

The figure below shows how to eliminate the point of failure and make the load balancer highly available. You add redundancy by adding a second Apache+mod_proxy. However, only one of the Apaches is active. The second Apache is not handling any requests; it merely monitors the active server using a tool like heartbeat. If for some reason the active server goes down, the 2nd server knows, and the passive server takes over the IP address and starts handling requests. How does this happen? This is possible because the IP address for this application that is advertised to the world is shared by the two Apaches. This is known as a virtual IP address. While the 2 servers share the virtual IP, TCP/IP routes packets to only the active server. When the active server goes down, the passive server tells TCP/IP to start routing packets intended for this IP address to it. There are TCP/IP commands that let the server start and stop listening on the virtual IP address.

Tools like heartbeat and UltraMonkey enable you to maintain a heartbeat with another server and fail over when necessary. With heartbeat, there is a heartbeat process on each server. Config files have information on the virtual IP address, the active server and the passive server. There are several articles on the internet on how to set up heartbeat.

In summary, you can build highly available applications using open source tools. The key concepts of HA – redundancy, monitoring & failover, virtual IP addresses – apply to any service, not just web servers. You can use the same concepts to make your database server highly available.

Reference: High Availability for Web applications from our JCG partner Manoj Khangaonkar at the The Khangaonkar Report blog....

Analysis of software developer’s competency – Choosing a right team member

In this post I shall try to explain an approach for estimating a developer's skills. The approach is still a concept, which lacks some concrete decisions, but you can choose the thing which suits your case best.

There are a number of methods for estimating developer skills, my favourite being the developer competency matrix. This method is very good, and it proved to be useful for general estimation of different skills, for example as a part of a general employment assessment. But here it is relevant to estimate competency in relation to a specific project, for example when choosing the right team members for a specific project, assuming we have a list of potential candidates to choose from. I would also like to automate this process, so the approach excludes the personal qualities for that reason – being not quantitative. This is no easy task due to the vast space of different skills one might have, so I shall restrict myself only to the most common types of software development, which are web and app development for the most common platforms. Additionally, most developers I work with are also involved in organizational activities, such as team management, customer relations, communication… I shall try to isolate these qualities out of this analysis and focus on pure technical skills.

We shall try to represent this software developer skill space with different dimensions:

- programming language
- tools and libraries
- platform
- application type
- experience (length)
- role (depth)

These dimensions are basically factors which are taken into account for the competency analysis. There may be other relevant dimensions added. The programming language dimension is pretty obvious; it is a distinct list of items such as C, C++, Java… The tools and libraries dimension represents various IDE tools, compilers, editors, frameworks and libraries which are used when developing software. The platform dimension represents the environment in which the application is deployed, and encapsulates both the hardware and software environment. It is also a distinct list of items, such as Windows, Linux, iPhone, desktop, Silverlight, Flash… The application type dimension represents the domain in which the software is being used, and it is also a distinct list of items such as "information system", "online sales", "banking", "medical device", "web portal", "social network"… The experience dimension simply represents the length of experience for particular development which has already happened in the past. The role dimension represents the level of the development activity and may contain items such as "apprentice developer", "medior developer", "senior developer", "software architect", "platform architect"…

With these dimensions one should be able to define a metrics system for measurement of general competencies, or a measurement system for a specific project. The simplest thing to do is to represent the metric as a linear combination of the dimensions, but there may be other useful methods as well. For a specific project, different weights are given to each value in every dimension. This way, basically, we define what we are looking for in a candidate. If we accept only a Java developer for a position, we would give other programming languages a zero weight. If we need a candidate to be senior, but would also accept a medior developer, we assign appropriate weights to these values. Our candidates need to fill in their values for every relevant dimension, or we extract that from the CV or an interview, giving us values we can work with (a small sketch of such a weighted score follows below).
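To make the linear-combination idea concrete, here is a minimal Java sketch of such a scoring function. The dimension values and weights are invented for illustration; a real system would validate inputs and normalise the scales:

import java.util.HashMap;
import java.util.Map;

public class CompetencyScore
{
    /**
     * Weighted linear score: every dimension value the candidate reports
     * (e.g. "java" -> 5 years of experience) is multiplied by the weight
     * the project assigns to that value (unlisted values weigh 0).
     */
    public static double score(Map<String, Double> candidate, Map<String, Double> weights)
    {
        double total = 0.0;
        for (Map.Entry<String, Double> entry : candidate.entrySet())
        {
            Double weight = weights.get(entry.getKey());
            total += entry.getValue() * (weight == null ? 0.0 : weight);
        }
        return total;
    }

    public static void main(String[] args)
    {
        Map<String, Double> weights = new HashMap<String, Double>();
        weights.put("java", 1.0);     // we want a Java developer
        weights.put("senior", 1.0);   // senior preferred...
        weights.put("medior", 0.5);   // ...but medior is acceptable

        Map<String, Double> candidate = new HashMap<String, Double>();
        candidate.put("java", 5.0);   // 5 years of Java
        candidate.put("medior", 1.0); // current role

        System.out.println(score(candidate, weights)); // prints 5.5
    }
}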
For each candidate we then calculate the metric, and choose the one with the highest score! It would be interesting to create a web app which would allow online calculation of the competency based on a given criterion. Not to be forgotten, there are other, more human factors which need to be considered. I shall reflect on that in a separate post.

Reference: Analysis of software developer's competency – Choosing a right team member from our JCG partner Nenad Sabo at the Software thoughts blog....

ADF Declarative Component example

In my previous post I promised to show how to create an ADF Declarative Component for a Smart List Of Values. So, I'm going to create a component consisting of three elements: a label, an input text and a combobox list of values. That's very easy.

I created a separate ADF ViewController project in my workspace. In this project, open the Create JSF Declarative Component wizard. The new declarative component smartLovDef should have at least three attributes: some string for the label, an attribute binding for the input text, and a LOV binding for the combobox list of values. The wizard creates the metadata file declarativecomp-metadata.xml and the smartLovDef.jspx file where we can put the content of our component.

The source code of smartLovDef.jspx looks like this:

<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
          xmlns:f="http://java.sun.com/jsf/core"
          xmlns:h="http://java.sun.com/jsf/html"
          xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <jsp:directive.page contentType="text/html;charset=UTF-8"/>
  <af:componentDef var="attrs" componentVar="component">
    <af:panelLabelAndMessage label="#{attrs.label}" id="plam1">
      <af:panelGroupLayout id="pgl1" layout="horizontal">
        <af:inputText value="#{attrs.attrBinding.inputValue}"
                      required="#{attrs.attrBinding.hints.mandatory}"
                      columns="#{attrs.attrBinding.hints.displayWidth}"
                      id="deptid" partialTriggers="departmentNameId"
                      autoSubmit="true" simple="true"/>
        <af:inputComboboxListOfValues id="departmentNameId"
                      popupTitle="Search and Select: #{attrs.lovBinding.hints.label}"
                      value="#{attrs.lovBinding.inputValue}"
                      model="#{attrs.lovBinding.listOfValuesModel}"
                      columns="#{attrs.lovBinding.hints.displayWidth}"
                      shortDesc="#{attrs.lovBinding.hints.tooltip}"
                      partialTriggers="deptid"
                      simple="true">
        </af:inputComboboxListOfValues>
      </af:panelGroupLayout>
    </af:panelLabelAndMessage>
    <af:xmlContent>
      <component xmlns="http://xmlns.oracle.com/adf/faces/rich/component">
        <display-name>smartLovDef</display-name>
        <attribute>
          <attribute-name>label</attribute-name>
          <attribute-class>java.lang.String</attribute-class>
          <required>true</required>
        </attribute>
        <attribute>
          <attribute-name>attrBinding</attribute-name>
          <attribute-class>java.lang.Object</attribute-class>
          <required>true</required>
        </attribute>
        <attribute>
          <attribute-name>lovBinding</attribute-name>
          <attribute-class>java.lang.Object</attribute-class>
          <required>true</required>
        </attribute>
        <component-extension>
          <component-tag-namespace>cscomponent</component-tag-namespace>
          <component-taglib-uri>/componentLib</component-taglib-uri>
        </component-extension>
      </component>
    </af:xmlContent>
  </af:componentDef>
</jsp:root>

The next step is to deploy the component into an ADF Library.
We have to add a new deployment profile for the CSComponents project and deploy the project into the library. The following step is to define a File System connection in the resource palette pointing to the deployment path of the CSComponents project. After that we have to choose the project where we're going to use the new component (in my case ViewController) and add the CSComponents.jar library to it. Now we can use the smartLovDef component in our page and drag it from the component palette. In our jspx page the source code is going to look like this:

<cscompLib:smartLovDef label="#{bindings.DepartmentId.label}"
                       attrBinding="#{bindings.DepartmentId}"
                       lovBinding="#{bindings.DepartmentName}"
                       id="sld1"/>

Reference: ADF Declarative Component example from our JCG partner Eugene Fedorenko at the ADF Practice blog....

ToString: Hexadecimal Representation of Identity Hash Codes

I have blogged before on the handy Apache Commons ToStringBuilder and I was recently asked what the seemingly cryptic text appearing in the generated String output constitutes. The colleague asking the question correctly surmised that what he was looking at was a hash code, but it did not match his instance's hash code. I explained that ToStringBuilder adds the identity hash code in hexadecimal format to its output. In this post, I look in more depth at ToStringBuilder's use of the identity hash code presented in hexadecimal format. Even those not using ToStringBuilder might find this information useful as Java's standard Object.toString() also uses a hexadecimal representation of what is effectively its identity hash code. I'll begin with a very simple Java example using ToStringBuilder. This example uses three Java classes (Person.java, Employee.java, and Main.java) that are shown next.

Person.java

package dustin.examples;

import org.apache.commons.lang.builder.ToStringBuilder;

/**
 * A simple representation of a Person intended only to demonstrate Apache
 * Commons ToStringBuilder.
 *
 * @author Dustin
 */
public class Person
{
   /** Person's last name (surname). */
   protected final String lastName;

   /** Person's first name. */
   protected final String firstName;

   /**
    * Parameterized constructor for obtaining an instance of Person.
    *
    * @param newLastName Last name of new Person instance.
    * @param newFirstName First name of new Person instance.
    */
   public Person(final String newLastName, final String newFirstName)
   {
      this.lastName = newLastName;
      this.firstName = newFirstName;
   }

   /**
    * Provide String representation of this Person instance.
    * @return My String representation.
    */
   @Override
   public String toString()
   {
      final ToStringBuilder builder = new ToStringBuilder(this);
      builder.append("First Name", this.firstName);
      builder.append("Last Name", this.lastName);
      return builder.toString();
   }
}

Employee.java

package dustin.examples;

import java.util.Objects;
import org.apache.commons.lang.builder.ToStringBuilder;

/**
 * Simple class intended to demonstrate ToStringBuilder.
 *
 * @author Dustin
 */
public class Employee extends Person
{
   /** Employee ID. */
   private final String employeeId;

   /**
    * Parameterized constructor for obtaining an instance of Employee.
    *
    * @param newLastName Last name of the employee.
    * @param newFirstName First name of the employee.
    * @param newId Employee's employee ID.
    */
   public Employee(
      final String newLastName, final String newFirstName, final String newId)
   {
      super(newLastName, newFirstName);
      this.employeeId = newId;
   }

   /**
    * Provide String representation of me.
    *
    * @return My String representation.
    */
   @Override
   public String toString()
   {
      final ToStringBuilder builder = new ToStringBuilder(this);
      builder.appendSuper(super.toString());
      builder.append("Employee ID", this.employeeId);
      return builder.toString();
   }

   /**
    * Simple object equality comparison method.
    *
    * @param obj Object to be compared to me for equality.
    * @return {@code true} if the provided object and I are considered equal.
    */
   @Override
   public boolean equals(Object obj)
   {
      if (obj == null)
      {
         return false;
      }
      if (getClass() != obj.getClass())
      {
         return false;
      }
      final Employee other = (Employee) obj;
      if (!Objects.equals(this.employeeId, other.employeeId))
      {
         return false;
      }
      return true;
   }

   /**
    * Hash code for this instance.
    *
    * @return My hash code.
    */
   @Override
   public int hashCode()
   {
      int hash = 3;
      hash = 19 * hash + Objects.hashCode(this.employeeId);
      return hash;
   }
}

Main.java (Version 1)

package dustin.examples;

import static java.lang.System.out;

/**
 * Simple class enabling demonstration of ToStringBuilder.
 *
 * @author Dustin
 */
public class Main
{
   /**
    * Main function for running Java examples with ToStringBuilder.
    *
    * @param args the command line arguments
    */
   public static void main(String[] args)
   {
      final Person person = new Person("Washington", "Willow");
      out.println(person);

      final Employee employee = new Employee("Lazentroph", "Frank", "56");
      out.println(employee);
   }
}

The above example is simple and its output is shown next:

The output depicted above shows the String in question in the ToStringBuilder-generated output for both instances. The String representation of the instance of the Person class includes the String "1f5d386" and the String representation of the instance of the Employee class includes the String "1c9b9ca". These strings are the hexadecimal representation of each object's identity hash code.

The strings "1f5d386" and "1c9b9ca" do not look like the integer hash codes many of us are used to seeing because of their hexadecimal representation. The Integer.toHexString(int) method [available since JDK 1.0.2] is a convenience method for printing an integer in hexadecimal format and can be used to convert "normal" hash codes to see if they match those generated by ToStringBuilder. I have added calls to this method on the instances' hash codes in the new version of the Main class.

Main.java (Version 2)

package dustin.examples;

import static java.lang.System.out;

/**
 * Simple class enabling demonstration of ToStringBuilder.
 *
 * @author Dustin
 */
public class Main
{
   /**
    * Main function for running Java examples with ToStringBuilder.
    *
    * @param args the command line arguments
    */
   public static void main(String[] args)
   {
      final Person person = new Person("Washington", "Willow");
      out.println(person);
      out.println("\tHash Code (ten): " + person.hashCode());
      out.println("\tHash Code (hex): " + Integer.toHexString(person.hashCode()));

      final Employee employee = new Employee("Lazentroph", "Frank", "56");
      out.println(employee);
      out.println("\tHash Code (ten): " + employee.hashCode());
      out.println("\tHash Code (hex): " + Integer.toHexString(employee.hashCode()));
   }
}

Executing the above leads to the following output:

As the output indicates, the hexadecimal representation of the hash code for the Person instance does indeed match that shown in the ToStringBuilder-generated String for that instance. However, the same cannot be said for the Employee instance. The difference is that the Person class does not override the hashCode() method and so uses the identity hash code by default, while the Employee class overrides its own hashCode() (which is therefore different from the identity hash code). The third version of Main outputs the identity hash code using System.identityHashCode(Object) [discussed in further detail in my blog post Java's System.identityHashCode].

Main.java (Version 3)

package dustin.examples;

import static java.lang.System.out;

/**
 * Simple class enabling demonstration of ToStringBuilder.
 *
 * @author Dustin
 */
public class Main
{
   /**
    * Main function for running Java examples with ToStringBuilder.
    *
    * @param args the command line arguments
    */
   public static void main(String[] args)
   {
      final Person person = new Person("Washington", "Willow");
      out.println(person);
      out.println("\tHash Code (ten): " + person.hashCode());
      out.println("\tHash Code (hex): " + Integer.toHexString(person.hashCode()));
      out.println("\t\tIdentity Hash (ten): " + System.identityHashCode(person));
      out.println("\t\tIdentity Hash (hex): " + Integer.toHexString(System.identityHashCode(person)));

      final Employee employee = new Employee("Lazentroph", "Frank", "56");
      out.println(employee);
      out.println("\tHash Code (ten): " + employee.hashCode());
      out.println("\tHash Code (hex): " + Integer.toHexString(employee.hashCode()));
      out.println("\t\tIdentity Hash (ten): " + System.identityHashCode(employee));
      out.println("\t\tIdentity Hash (hex): " + Integer.toHexString(System.identityHashCode(employee)));
   }
}

With this in place, we can now compare the identity hash code to the string generated by ToStringBuilder.

The last example definitively demonstrates that ToStringBuilder includes the hexadecimal representation of the system identity hash code in its generated output. If one wants to use the hexadecimal representation of the overridden hash code rather than of the identity hash code, an instance of ToStringStyle (typically an instance of StandardToStringStyle) can be used and the method setUseIdentityHashCode(boolean) can be invoked with a false parameter. This instance of ToStringStyle can then be passed to the ToStringBuilder.setDefaultStyle(ToStringStyle) method.

As a side note, the equals(Object) and hashCode() methods in the Employee class shown above were generated automatically by NetBeans 7.1. I was happy to see that, with my source version of Java for that project specified as JDK 1.7, this automatic generation of these two methods took advantage of the Objects class.

I have used ToStringBuilder-generated output throughout this post to facilitate discussion of hexadecimal representations of identity hash codes, but I could have simply used the JDK's own built-in "default" Object.toString() implementation for the same purpose. In fact, the Javadoc even advertises this:

The toString method for class Object returns a string consisting of the name of the class of which the object is an instance, the at-sign character `@', and the unsigned hexadecimal representation of the hash code of the object. In other words, this method returns a string equal to the value of:

getClass().getName() + '@' + Integer.toHexString(hashCode())

The only reason I did not use this example to begin with is that I almost always override the toString() method in my classes and do not get this "default" implementation. However, when I use ToStringBuilder to implement my overridden toString() methods, I do see these hexadecimal representations. I am likely to reduce my use of ToStringBuilder as I increase my use of Objects.toString().

Many of us don't think about hexadecimal representations or identity hash codes in our daily Java work. In this blog post, I have used ToStringBuilder's output as an excuse for looking a little closer at these two concepts. Along the way, I also briefly looked at the Integer.toHexString(int) method, which is useful for printing numbers in their hexadecimal representation. Knowing about Java's support for hexadecimal representation is important because it does show up in toString() output, in labeling of colors, memory addresses, and in other places.
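As a final aside, the hexadecimal token in a default toString() result can be parsed back into an int and compared with System.identityHashCode. Below is a minimal sketch of that round trip; the class name is invented, and the parsing assumes the default Object.toString() format quoted above:

public class IdentityHashDemo
{
   public static void main(String[] args)
   {
      final Object instance = new Object();
      final String text = instance.toString(); // e.g. java.lang.Object@1f5d386
      final String hex = text.substring(text.indexOf('@') + 1);
      // Parse as long first: the unsigned hex form of a negative int
      // would overflow Integer.parseInt.
      final int parsed = (int) Long.parseLong(hex, 16);
      // Prints true: the hex token is the identity hash code.
      System.out.println(parsed == System.identityHashCode(instance));
   }
}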
Reference: ToString: Hexadecimal Representation of Identity Hash Codes from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Quotes relating to System Design

There are a few quotes I think of when thinking about computer design. These are not specifically about computers, but I think they are appropriate.

Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
— Antoine de Saint-Exupery, French writer (1900 – 1944)

"Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it's worth it in the end because once you get there, you can move mountains."
— Steve Jobs [BusinessWeek, May 25, 1998]

Often contentious topics labour under the assumption that the outcome makes any difference in the first place.

DEVENTER (n) A decision that's very hard to make because so little depends on it, such as which way to walk around a park.
— The Deeper Meaning of Liff by Douglas Adams and John Lloyd

Everyone can learn something from your mistakes.

Mistakes – It could be that the purpose of your life is only to serve as a warning to others.
— Ashleigh Brilliant on Despair.com

Remember to keep what you are doing relevant.

Computers make it easier to do a lot of things, but most of the things they make it easier to do don't need to be done.
~ Andy Rooney

Information technology and business are becoming inextricably interwoven. I don't think anybody can talk meaningfully about one without talking about the other.
~ Bill Gates

Making things simple and clear should be much of the design effort – not just something which works.

A computer will do what you tell it to do, but that may be much different from what you had in mind.
~ Joseph Weizenbaum

Doing very little, often enough, can quickly add up.

The inside of a computer is as dumb as hell but it goes like mad!
~ Richard Feynman

Reference: Quotes relating to System Design from our JCG partner Peter Lawrey at the Vanilla Java blog.

The complex (event) world

This blog entry attempts to summarize the technologies in the CEP domain, touching on their prime feature(s) as well as their shortcomings. It sometimes seems that the term CEP is being overused (as happened with 'ESB'), and the write-up below reflects our perception and version of it.

ESPER (http://esper.codehaus.org/) is a popular open source component for complex event processing (CEP) available for Java. It includes rich support for pattern matching and stream processing based on sliding time or length windows. Despite the fervent discussion over the term 'CEP' (http://www.dbms2.com/2011/08/25/renaming-cep-or-not/), ESPER seems to be a good fit for the term, as it appears to be able to really identify "complex events" from a stream of simple events, thanks to ESPER's EPL (Event Processing Language).
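To give a flavour of what EPL looks like in practice, here is a minimal sketch against the classic Esper Java API. The StockTick event type and the query are invented for illustration, and class/package details may differ between Esper versions:

import com.espertech.esper.client.EPServiceProvider;
import com.espertech.esper.client.EPServiceProviderManager;
import com.espertech.esper.client.EPStatement;
import com.espertech.esper.client.EventBean;
import com.espertech.esper.client.UpdateListener;

public class EsperSketch {

    // Hypothetical event type used by the EPL query below.
    public static class StockTick {
        private final String symbol;
        private final double price;
        public StockTick(String symbol, double price) { this.symbol = symbol; this.price = price; }
        public String getSymbol() { return symbol; }
        public double getPrice() { return price; }
    }

    public static void main(String[] args) {
        EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider();
        engine.getEPAdministrator().getConfiguration().addEventType("StockTick", StockTick.class);

        // Continuous query over a 30-second sliding time window.
        EPStatement stmt = engine.getEPAdministrator().createEPL(
            "select symbol, avg(price) as avgPrice from StockTick.win:time(30 sec) group by symbol");

        stmt.addListener(new UpdateListener() {
            public void update(EventBean[] newEvents, EventBean[] oldEvents) {
                if (newEvents != null) {
                    System.out.println(newEvents[0].get("symbol") + " -> " + newEvents[0].get("avgPrice"));
                }
            }
        });

        // Feed a simple event into the engine; matching output is pushed to the listener.
        engine.getEPRuntime().sendEvent(new StockTick("IBM", 75.0));
    }
}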
Recently, while searching for an open source solution for real time CEP, our group stumbled across Twitter's Storm project (https://github.com/nathanmarz/storm). It claims to be most comparable to Yahoo's S4, while being in the same space as "Complex Event Processing" systems like Esper and Streambase. I am not sure about Streambase, but digging deeper into the Storm project made it look much different from CEP and from the ESPER solution. Ditto with S4 (http://incubator.apache.org/s4/). While S4 and Storm seem to be good at real time stream processing in a distributed mode, and they appear (as they claim) to be the "Hadoop for Real Time", they don't seem to have provisions to match patterns (and thus to indicate complex events).

Searching for a definition of CEP (that our study can relate to) led to the following bullets (http://colinclark.sys-con.com/node/1985400), which include the below four as prerequisites for a system/solution to be called a CEP component/project/solution:

- Domain Specific Language
- Continuous Query
- Time or Length Windows
- Temporal Pattern Matching

Continuous query supporting time/length windows and temporal pattern matching seems to be altogether missing in the current versions of the S4 and Storm projects. Probably this is due to their infancy, and they will mature to include such features in the future. As of now, they only seem fit for pre-processing the event stream before passing it over to a CEP engine like ESPER. Their ability to do distributed processing (a la map-reduce) can help to speed up the pre-processing, where events can be filtered off or enriched via some lookup/computation etc. There have also been some attempts to integrate Storm with Esper (http://tomdzk.wordpress.com/2011/09/28/storm-esper/).

While processing systems like S4 and Storm lack important features of CEP, ESPER based systems have the disadvantage of being memory bound. Having too many events or too large a time window can potentially cause ESPER to run out of memory. If ESPER is used to process real time streams, e.g. from social media, there will be a lot of data accumulating in ESPER's memory.

On a high level, the problem statement is to invent a CEP solution for big data. On a finer level, the problem statement includes architecting a CEP solution for handling on-board (batched) as well as in-flight (real-time) data. In DarkStar's terminology (http://www.eventprocessing-communityofpractice.org/EPS-presentations/Clark_EP.pdf), the requirement is "having matched a registered pattern in real time, discover similar patterns in the database". Since being memory bound is a limitation, it may prove useful if some mechanism to condense the in-memory events can be arrived at. The condensed data however should still be meaningful and retain the context of the original stream. DarkStar does this using Symbolic Aggregate Approximation (http://www.cs.ucr.edu/~eamonn/SAX.htm), and they claim to address the aforementioned requirements by using SAX together with AsterData's nCluster, which is an MPP (massively parallel processing) database with an embedded analytics engine based on SQL/MapReduce (http://www.asterdata.com/resources/mapreduce.php).

to be continued (as we research further) …

Reference: The complex (event) world from our JCG partner Abhishek Jain at the NS.Infra blog....