
4 Biggest Reasons Why Software Developers Suck at Estimation

Estimation is difficult. Most people aren't good at it, even in mundane situations. For example, when my wife asks me how much longer it will take me to fix some issue I'm working on, or to head home, I invariably reply "five minutes." I almost always honestly believe it will only take five minutes, but it never does. Most of the time my five minutes ends up being half an hour or more. But in comparison to the world of software development efforts, my five-minute estimate is usually fairly accurate: it's only off by a factor of six or so. It's not unheard of for software development estimates to be off by as much as one-hundred-fold. I've literally had an hour-long estimate turn into two weeks. But why are software development estimates usually off by so much? Here are the four biggest reasons I have found:

Reason 1: The unknown unknowns

This phrase was first uttered by former Secretary of Defense of the United States, Donald Rumsfeld. It basically refers to those things that we don't even know that we don't know. By far, this is the biggest reason why software developers often flub their estimates. It also happens to be the primary reason why I suck at telling my wife how long it will take me to get home: I don't know about the distractions that I don't know about yet.

Software development has a lot of unknowns. Some of the unknowns we know about. When we first start a project, we might have a good idea that we need to store the data in a database, but we don't know how we'll do it. That is a known unknown. We know that we don't know something that we'll eventually need to know. Estimating a known unknown is pretty difficult, but we can do a decent job of it if we can compare it to something similar we've already done before. I don't know how long it will take me to write this particular blog post, but I know about how long it has taken me to write other blog posts of a similar length.

What is really scary, though, are the things that we don't even know we don't know yet. These unknown unknowns sneak up on us because we don't even know they exist; by definition they are unknowable. It's one thing to know that we have a gap in a bridge somewhere that we have to cross; it is a whole other thing to have to cross a bridge blindfolded and only find out about gaps when you fall through them. Constantly, in software development, we are faced with situations where we encounter these unknown unknowns. There is no good way to estimate around them. The best we can do in these cases is give ourselves a lot of padding and a lot of rope so that we can climb out of any gaps in the bridge that we fall into.

Reason 2: Lengthy time periods

As if unknown unknowns weren't bad enough, the deck is stacked against us even further. Most software development estimates involve fairly long periods of time. This has changed somewhat with Agile development practices, but we are still often asked to estimate a week or more of work at a time. (Plus, let's not forget those sneaky project managers who try to throw Agile projects into Microsoft Project anyway and say "yeah, I know this is Agile and all, but I'm just trying to get a rough idea of when we'll be done with all of the features.")

It's fairly easy to estimate short periods of time (well, unless it's me telling my wife how long it will take for me to get off the computer). Most of us can pretty accurately predict how long it will take us to brush our teeth, write an email or eat dinner.

Long periods of time are much more difficult to estimate accurately. Estimating how long it will take you to clean out the garage, write a book, or even just go grocery shopping is much more challenging. The longer the period of time you are trying to estimate, the more that small miscalculations and the effects of known unknowns can cause an initial estimate to be grossly off target. In my experience, I've found that estimating anything that will take more than about two hours is where things really start to go off the rails.

As a mental exercise, try to estimate things of varying lengths. How long will it take you to:

- do 10 pushups?
- make a cup of coffee?
- go to the convenience store and buy something?
- write a one page letter?
- read a 300 page novel?
- get the oil changed in your car?

Notice how the things that can be done in under half an hour are very easy to estimate with a high level of confidence, but as you go out further in time it gets much more difficult. Most of the time when we do software development estimates, we don't try to estimate short things, like how long it will take to write a single unit test; instead we tend to estimate longer things, like how long it will take to complete a feature.

Reason 3: Overconfidence

I'm pretty presumptuous when it comes to estimates. I usually think I'm very accurate at making them. My wife would disagree, at least when it comes to estimating how long things will take me, and history would probably tend to vindicate her viewpoint. As software developers, we can often become pretty convinced of our ability to accurately predict how long something will take. Often, if the programming task we are about to embark upon is one we feel confident about, we can be pretty aggressive with our estimates, sometimes to the point of absurdity.

"How long will it take you to get that feature done?"
"Oh, that? That's easy. I can get that done in a few hours. I'll have it done by tomorrow morning."
"Are you sure? What about testing? What if something comes up?"
"Don't worry, it's easy. Shouldn't be a problem at all. I just need to throw a few buttons on a page and hook up the backend code."

But what happens when you actually sit down and try to implement the feature? Well, first you find out that it wasn't quite as easy as you thought. You forgot to consider a few of the edge cases that have to be handled. Pretty soon you find yourself taking the entire night just to get set up in order to actually start working on the problem. Hours turn into days, days into weeks, and a month later you've finally got some shippable code. Now, this might be a bit of an exaggeration, but overconfidence can be a big problem in software development, especially when it comes to estimates.

Reason 4: Under-confidence

Under-confidence isn't actually a word. I suppose that is because someone wasn't confident enough to put it in the dictionary. But just as overconfidence can cause a software developer to underestimate how long a programming task will take, under-confidence can cause that same software developer to overestimate a completely different task, which may even be much easier. I don't know about you, but I often find myself in situations where I am very unsure of how long something will take. I can turn a simple task that I don't feel comfortable doing into a huge mountain that seems almost impassable. We tend to view things that we've never done before as harder than they are, and things that we have done before as easier than they are; it's just human nature.

Although it may not seem like it, under-confidence can be just as deadly to estimates. When we are under-confident, we are more likely to add large amounts of padding to our estimates. This padding might not seem all that bad, but work has a way of filling the time allotted for it. (This is known as Parkinson's law.) So, even though it might appear that we are pretty accurate with our estimates when we are under-confident, the truth is we may be wasting time by having work that might have been done in half the time fill the entire time that was allotted for it.

(By the way, if you are looking for a good book on Agile estimation, check out Mike Cohn's book: Agile Estimating and Planning.)

What else? Did I leave anything out? What do you think is the biggest reason why software development estimates are so difficult?

Reference: 4 Biggest Reasons Why Software Developers Suck at Estimation from our JCG partner John Sonmez at the Making the Complex Simple blog....

Understanding Spring Web Application Architecture: The Classic Way

Every developer must understand two things:

- Architecture design is necessary.
- Fancy architecture diagrams don't describe the real architecture of an application.

The real architecture is found in the code that is written by developers, and if we don't design the architecture of our application, we will end up with an application that has more than one architecture. Does this mean that developers should be ruled by architects? No. Architecture design is far too important to be left to the architects, and that is why every developer who wants to be more than just a typewriter must be good at it. Let's start our journey by taking a look at the two principles that will help us to design a better and simpler architecture for our Spring powered web application.

The Two Pillars of a Good Architecture

Architecture design can feel like an overwhelming task. The reason for this is that many developers are taught to believe that architecture design must be done by people who are guardians of a mystical wisdom. These people are called software architects. However, the task itself isn't as complicated as it sounds:

Software architecture is the high level structure of a software system, the discipline of creating such a high level structure, and the documentation of this structure.

Although it is true that experience helps us to create better architectures, the basic tools of architecture design are actually quite simple. All we have to do is follow these two principles:

1. The Separation of Concerns (SoC) Principle

The Separation of Concerns (SoC) principle is specified as follows:

Separation of concerns (SoC) is a design principle for separating a computer program into distinct sections, such that each section addresses a separate concern.

This means that we should:

- Identify the "concerns" that we need to take care of.
- Decide where we want to handle them.

In other words, this principle will help us to identify the required layers and the responsibilities of each layer.

2. The Keep It Simple Stupid (KISS) Principle

The Keep It Simple Stupid (KISS) principle states that:

Most systems work best if they are kept simple rather than made complicated; therefore simplicity should be a key goal in design and unnecessary complexity should be avoided.

This principle is the voice of reason. It reminds us that every layer has a price, and if we create a complex architecture that has too many layers, that price will be too high. In other words, we should not design an architecture like the many-layered one pictured in the original post. I think that John, Judy, Marc, and David are guilty of mental masturbation. They followed the separation of concerns principle, but they forgot to minimize the complexity of their architecture. Sadly, this is a common mistake, and its price is high:

- Adding new features takes a lot longer than it should because we have to transfer information through every layer.
- Maintaining the application is a pain in the ass, if not impossible, because no one really understands the architecture, and the ad-hoc decisions that are made every day will pile up until our code base looks like a big pile of shit that has ten layers.

This raises an obvious question: What kind of an architecture could serve us well?

Three Layers Should Be Enough for Everybody

If we think about the responsibilities of a web application, we notice that a web application has the following "concerns":

- It needs to process the user's input and return the correct response back to the user.
- It needs an exception handling mechanism that provides reasonable error messages to the user.
- It needs a transaction management strategy.
- It needs to handle both authentication and authorization.
- It needs to implement the business logic of the application.
- It needs to communicate with the used data storage and other external resources.

We can fulfil all these concerns by using "only" three layers. These layers are:

- The web layer is the uppermost layer of a web application. It is responsible for processing the user's input and returning the correct response back to the user. The web layer must also handle the exceptions thrown by the other layers. Because the web layer is the entry point of our application, it must take care of authentication and act as a first line of defense against unauthorized users.
- The service layer resides below the web layer. It acts as a transaction boundary and contains both application and infrastructure services. The application services provide the public API of the service layer. They also act as a transaction boundary and are responsible for authorization. The infrastructure services contain the "plumbing code" that communicates with external resources such as file systems, databases, or email servers. Often these methods are used by more than one application service.
- The repository layer is the lowest layer of a web application. It is responsible for communicating with the data storage.

The components that belong to a specific layer can use the components that belong to the same layer or to the layer below it. The high level architecture of a classic Spring web application looks as follows:

The next thing that we have to do is design the interface of each layer, and this is the phase where we run into terms like data transfer object (DTO) and domain model. These terms are described in the following:

- A data transfer object is an object that is just a simple data container, and these objects are used to carry data between different processes and between the layers of our application.
- A domain model consists of three different objects:
  - A domain service is a stateless class that provides operations which are related to a domain concept but aren't a "natural" part of an entity or a value object.
  - An entity is an object that is defined by its identity, which stays unchanged through its entire lifecycle.
  - A value object describes a property or a thing, and these objects don't have their own identity or lifecycle. The lifecycle of a value object is bound to the lifecycle of an entity.

Now that we know what these terms mean, we can move on and design the interface of each layer. Let's go through our layers one by one:

- The web layer should handle only data transfer objects.
- The service layer takes data transfer objects (and basic types) as method parameters. It can handle domain model objects, but it can return only data transfer objects back to the web layer.
- The repository layer takes entities (and basic types) as method parameters and returns entities (and basic types).
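To make these rules concrete, here is a minimal sketch of the three layers. The Employee, EmployeeDTO, and EmployeeRepository names are invented for illustration, and imports are omitted, as elsewhere on this page:

// Web layer: handles only DTOs.
@RestController
public class EmployeeController {

    @Autowired
    private EmployeeService service;

    @RequestMapping(value = "/employees/{id}", method = RequestMethod.GET)
    public EmployeeDTO findById(@PathVariable("id") Long id) {
        // The controller never touches the Employee entity.
        return service.findById(id);
    }
}

// Service layer: the transaction boundary; maps entities to DTOs.
@Service
public class EmployeeService {

    @Autowired
    private EmployeeRepository repository;

    @Transactional(readOnly = true)
    public EmployeeDTO findById(Long id) {
        Employee employee = repository.findById(id);
        // Entities stop here; only a DTO crosses into the web layer.
        return new EmployeeDTO(employee.getId(), employee.getName());
    }
}

// Repository layer: takes and returns entities (and basic types).
public interface EmployeeRepository {

    Employee findById(Long id);
}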
This raises one very important question:

Do we really need data transfer objects? Why can't we just return entities and value objects back to the web layer?

There are two reasons why this is a bad idea:

- The domain model specifies the internal model of our application. If we expose this model to the outside world, the clients would have to know how to use it. In other words, the clients of our application would have to take care of things that don't belong to them. If we use DTOs, we can hide this model from the clients of our application and provide an easier and cleaner API.
- If we expose our domain model to the outside world, we cannot change it without breaking the other stuff that depends on it. If we use DTOs, we can change our domain model as long as we don't make any changes to the DTOs.

The "final" architecture of a classic Spring web application looks as follows:

There Are Many Unanswered Questions Left

This blog post described the classic architecture of a Spring web application, but it doesn't provide any answers to the really interesting questions, such as:

- Why is layer X responsible for concern Y?
- Should our application have more or fewer than three layers?
- How should we design the internal structure of each layer?

The reason for this is simple: We must learn to walk before we can run. The next blog posts of this tutorial will answer these questions.

Reference: Understanding Spring Web Application Architecture: The Classic Way from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....

Revealing the length of Garbage Collection pauses

There are several ways to improve your product. One such way is to carefully track what your users are experiencing and improve based on that. We apply this technique ourselves and have again spent some time looking at different data. Besides many other aspects we were after, we also posed the question: "what is the worst-case latency impact that GC triggers for an application?"

To answer the question, we analyzed the data from 312 different JVMs that attached the Plumbr Agent during the past two months. The results were interesting and we decided to share the outcome with you. On the X-axis there is the maximum length of the GC pause within each JVM, grouped into buckets. On the Y-axis there is the number of applications with a maximum pause falling into a particular bucket. Using the data above, we can, for example, claim the following about the 312 JVMs being monitored:

- 57 JVMs (18%) managed to keep GC pauses at bay, with a maximum pause under 256ms.
- 73 JVMs (23%) faced a maximum GC pause between 1024ms and 4095ms.
- 105 JVMs (33%) stopped the application threads for 4 or more seconds due to GC.
- 43 JVMs (14%) faced a maximum GC pause longer than 16 seconds.
- 18 JVMs (6%) contained a GC pause spanning more than a minute.
- The current record holder managed to stop all application threads for more than 16 minutes due to a garbage collection pause.

We do admit that our data might be biased, in the sense that the JVMs Plumbr ends up monitoring are more likely to suffer from performance issues that trigger longer GC pauses. So you have to take these results with a grain of salt, but overall the discoveries are still interesting. After all, tens of seconds added to the latency cannot be considered tolerable for the majority of the applications out there.

We have several hypotheses why the situation looks as bad as it currently does:

- In the first case, engineers are not even aware that their application is performing so badly. Having no access to GC logs and being isolated from customer support might completely hide the problem from the people who would be able to improve the situation.
- The second case consists of people struggling to reproduce the problem. As always, the first step towards a solution is building a reproducible test case in an environment where further experiments can be conducted. When the long-lasting GC pauses only occur in production environments, coming up with a solution is a daunting task.
- The third group of issues falls on the shoulders of engineers who are aware of the issue and can even reproduce the behaviour at will, but have no clue how to actually improve the situation. Tuning GC is a tricky task and requires a lot of knowledge about JVM internals, so most engineers in this situation find themselves between a rock and a hard place.

The good news is that we are working hard on making all those reasons obsolete: Plumbr surfaces the poorly-behaving GC issues, alerts you when these issues are detected and, better yet, gives you tailor-made solutions for improving the behaviour. So instead of weeks of trial-and-error, you are now able to surface and solve those cases in minutes.

Reference: Revealing the length of Garbage Collection pauses from our JCG partner Nikita Artyushov at the Plumbr Blog....

The DSL Jungle

DSLs are a common thing in the programming world nowadays. Many frameworks and tools decide to build a DSL for their… specific things. Build tools are the primary candidates, but testing frameworks, web frameworks and whatnot also decide to define a DSL. With these DSLs you define build steps, web routing rules, test acceptance criteria, etc.

What is the most common thing about all these DSLs? Two things. First, they are predominantly about configuration: some specific way of configuring something specific to the tool or framework. The second thing is that you copy-paste code. Every time I'm confronted with some DSL that is meant to help with my programming task, I end up copy-pasting examples or existing code, and then modifying it. Even though I've been working with a DSL for 8 months (from time to time), I just don't remember its syntax. And you may say "yeah, that's because you use bad DSLs". Well, then I haven't seen a good one yet. I'm currently using sbt, spray routing, and cucumber for Scala; previously I've used Groovy and Grails DSLs, and a few others along the way.

But is it bad that you copy-paste existing pieces of code? Not always. You can, of course, base your configuration on existing, working pieces. But there are three issues: duplicate code, autocomplete and exploration. You know copy-pasting is wrong and leads to duplication. Not only that, but you may forget to change or remove something in the pasted code. And if you want to add some property, it would be good to be able to auto-complete it, rather than mistyping it or forgetting whether it was "filePath", "filepath", "file-path" or just "path". Having 2-3 DSLs in parts of a big project, you can't remember all the property names, so the alternative is to go and see the documentation (if you don't have a working piece with that particular property to copy-paste from).

Exploration is an even bigger issue. Especially when learning, or remembering how to do certain things with a given DSL, it is crucial to be able to explore the possibilities. What properties does this have that might be useful? What does this property do exactly, and does it have subproperties? What can I nest under this item? This is very important, regardless of your knowledge of the tool/framework. But with most DSLs you don't have that. They either have some bizarre syntax, or they are JSON-based, or they look like the language you are using, but not quite, and hence even an IDE finds it difficult to understand them (spray being such an example). You either look at the documentation, or you copy-paste, or both. And you are kind of lost in this DSL jungle of ever so "cooler" DSLs that do a wide variety of things.

And now I'll drop the X-bomb. I love XML. Trusting the "XML configuration files are evil" meme has led to many incomprehensible configurations that are "short and easy to read and write". Easy, if you remembered what those double-percentage signs meant compared to the single percentage signs, and where exactly to put the parentheses. In almost every scenario where someone decided that a DSL is a good idea, XML would have worked brilliantly. Using an XSD schema (which, I agree, is a bit tedious to write) you can turn any XML-aware tool into an IDE for configuration. Take the Maven pom file, for example. Did you forget what element you could nest under "build"? Hit CTRL+space and you'll find out.
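As a quick illustration (a trimmed-down pom, not a complete build file), it is the schema declaration on the root element that gives every XML-aware editor enough information to offer that completion:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <build>
        <!-- With the XSD above, CTRL+space here suggests the valid
             children of <build>: plugins, resources, finalName, ... -->
        <finalName>my-app</finalName>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>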
Because it is unified, you can read the XML configuration of any framework or tool that uses it, not just this particular one that happens to be the n-th DSL in a single project. While XML is verbose, it is straightforward and standard. So if you are writing a tool and can't make some configuration available via annotations or via very simple code (builders, setters, fluent interfaces), don't go for a DSL. Don't write DSLs where you can easily use XML. It will look good on your README.md, but your users will copy-paste all the time and may actually hate it. So please don't contribute to the DSL jungle.

And do you know why that is? Remember the initial note that these are DSLs you use when programming. Well, DSLs are not for programmers. DSLs are for non-programmers to express business logic in (almost) prose. Or at least their usage should be limited to that, where they can really excel. If you are making a tool for business analysts, feel free to design the most awesome DSL. If you are building a tool for programmers, don't.

Reference: The DSL Jungle from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog....

Legacy Code to Testable Code #4: More Accessors!

This post is part of the "Legacy Code to Testable Code" series. In the series we'll talk about making refactoring steps before writing tests for legacy code, and how they make our life easier. It continues the last post on accessors.

We talked about "setter" accessors as a means to inject values. The other side of the coin is when we want to know that something has happened inside our object, for example that internal state has changed, or that a non-public method was called. In fact, this is like the "if a tree fell in the woods" question: Why do we care if internal state changed?

In legacy code, there are many cases where a class grows large and does many things. If we had separated the responsibilities, the result would be something we could assert on another class. Alas, with god objects, things are a bit of a mess. When that happens, our acceptance criteria may move inside: we can check either internal state or internal method calls for the operation's impact.

To check internal state, we can add a "getter" method. Adding a "getter" function is easy, and if it doesn't have logic (and it shouldn't), it can expose the information without any harm done. If the refactoring tool begs you to add a "setter" as well, you can set it to be private, so no one else uses it.

Role reversal

In a funny way, "getter" methods can reverse roles: we can use a "getter" method to inject a value by mocking it. So in our getAccount example:

protected Bank getBank() {
    return new Bank();
}

public void getAccount() {
    Bank tempBank = getBank();
    ...

By mocking the getBank method we can return a mockBank (according to our tools of choice):

when(testedObject.getBank()).thenReturn(mockBank);
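For a fuller picture, here is a minimal sketch of how that might look with Mockito; AccountService is an invented name for the class under test, the test is assumed to live where the protected getter is visible, and imports are omitted. Since the stubbed getter sits on the object under test itself, a spy with doReturn is used (a plain when(...).thenReturn(...) would invoke the real getter):

public class AccountServiceTest {

    @Test
    public void getAccountUsesTheBankWeInject() {
        Bank mockBank = mock(Bank.class);

        // Spy on a real instance so the rest of its logic still runs.
        AccountService testedObject = spy(new AccountService());

        // Stub the getter; doReturn avoids calling the real getBank().
        doReturn(mockBank).when(testedObject).getBank();

        testedObject.getAccount();

        // The tested code obtained its Bank through the getter we stubbed.
        verify(testedObject).getBank();
    }
}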
On the other hand, we can verify a call on a "setter" instead of exposing a value. So if our Account object has an internal state called balance, instead of exposing it and checking it after the tested operation, we can add a "setter" method and see if it was called:

verify(account).setBalance(3);

In contrast to injection, when we probe we don't want to expose an object on the stack: it's in the middle of an operation, and therefore not interesting (and hard to examine). If there's an actual case for that, we can use the "setter" method verification option. In this example, the addMoney function calculates the interimBalance before setting the value back to currentBalance:

public void addMoney(int amount) {
    int interimBalance = currentBalance;
    interimBalance += amount;
    currentBalance = interimBalance;
}

If we want to check the `currentBalance` before the calculation, we can modify the method to:

public void addMoney(int amount) {
    int interimBalance = setInterim(currentBalance);
    interimBalance += amount;
    currentBalance = interimBalance;
}

protected int setInterim(int balance) {
    return balance;
}

Then in our test we can use verification as a precondition:

verify(account).setInterim(100);

Adding accessors is a solution for a problem that was created before we thought about tests: the design is not modular enough and has too many responsibilities. It holds information inside it, and tests (and future clients) cannot access it. If we had written it "right" the first time, the god class would probably have been written as a set of classes. With our tests in place, we want to get to a modular design. Tests give us the safety to change the code. So do the automated refactoring tools. We can start the separation even before our tests using the Extract Method refactoring pattern. We're going to discuss it next.

Reference: Legacy Code to Testable Code #4: More Accessors! from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

My Five Rules for Remote Working

A couple of weeks ago, there was a stir (again) about remote working and its success and/or failure: it was reported that Reddit, the website where many people lose countless hours, was forcing all its employees to move to SF. After a similar thing happened at Yahoo last year, it made me think about why remote work is such a huge success for us at Activiti and Alfresco.

You see, I've been a remote worker for more than five years now, first at Red Hat and then at Alfresco. I worked a couple of years as a Java consultant before that, so I've seen my share of office environments (checking my LinkedIn, it comes down to about 10 different office environments). I had to go to these offices each day. Comparing those experiences, I can, without exaggeration, say that I'm way more productive nowadays, working from home. Many people (both in and outside IT) ask me how I do it. They say "they couldn't do it". Maybe that's true. Maybe some people need a lot of people around them. But for the kind of job I'm in, developing software, I believe having a lot of people around me doesn't help me write higher quality software faster. Anyway, like I said, I did some thinking around it and I came to the following "rules" which I have been following all these years and which I believe are crucial (at least for me!) to making remote working a success.

Rule 1: The Door

Having a separate space to work is crucial when you want to do serious remote working. Mentally it is important that you can close "The Door" of your office space when you have finished working. It brings some kind of closure to the working day. Many people, when they work from home, put their laptop on, let's say, the kitchen table. That doesn't work. It is not a space that encourages work. There are distractions everywhere (kids that come home, food very close by, …). But most importantly, there is no distinction between when you are working and when you are not. My wife and kids know and understand that when The Door is closed, I'm at work. I can't be disturbed until that Door opens. But when I close The Door in the evening and come downstairs, they also know that I'm fully available for them.

Rule 2: The Gear

The second rule is related to the first one: what to put in that room. The answer is simple: only the best. A huge desk, a big-ass 27″ monitor (or bigger), a comfortable chair (your ass spends a lot of time on it), the fastest internet you can buy, some quality speakers, a couple of cool posters and family pictures on the wall, …. This is the room where you spend most of your time in the week, so you need to make it a place where you love to go.

Often I hear from people whose company allows remote work that their company should pay for all of this. I think that's wrong. It's a two-way street: your company gives you the choice, privilege and trust to work from home, so you, from your side, must make sure that your home office isn't a downgrade from the gear you would have at the office. Internet connection, chair and computer monitor are probably the most important bits here. If you try to be cheap on any of those, you'll repay it in decreased productivity.

Rule 3: The Partner

Your partner is of utmost importance to making remote work a success. Don't be fooled by the third place here: when your partner is not into it, all the other points are useless. It's pretty simple and comes down to one core agreement you need to make when working from home: when you are working from home, you are not "at home". When you work, there is no time for cleaning the house, doing the dishes, mowing the grass, etc. You are at work, and that needs to be seen as a full-time, serious thing. Your partner needs to understand that doing any of these things would be bad for your career. Many people think this is easy, but I've seen many fail. A lot of people still see working from home as something that is not the same as "regular work". They think you've got all the time in the world now. Wrong. Talk it through with your partner. If he/she doesn't see it (or is jealous), don't do it.

Rule 4: Communicate, communicate, communicate

More than a team in an office, you need to communicate. If you don't communicate, you simply don't exist. At Activiti, we are skyping a lot during the day. We all know exactly what the other team members are currently doing. We have an informal agreement that we typically don't announce a call. You just press the 'call' button and the other side has to pick it up and respond. It's the only way remote work can work. Communicate often. Also important: when you are away from your laptop, say so in a common chat window. There is nothing as damaging for remote workers as not picking up Skype or the phone for no reason.

Rule 5: Trust People

The last rule is crucial. Working remotely is based on trust. Unlike in the office, there is no physical proof that you are actually working (although being physically in an office is not correlated with being productive!). You need to trust people to do their job. But at the same time, don't be afraid to check up on people's work (for us, that's the commits) and ask why something is taking longer than expected. Trust grows both ways. The second part of this trust story is that there needs to be trust from the company towards the team. If that trust is missing, your team won't be working remotely for long. At Activiti, we are very lucky to have Paul Holmes Higgin as our manager. He is often in the Alfresco office and makes sure that whatever we are doing is known to the company and vice versa. He attends many of the (online) meetings that happen company-wide all the time, so that we are free to code. There is nothing as bad for a remote team as working in isolation.

Conclusion

So those are my five (personal!) rules I follow when working from home. With all this bad press from the likes of Reddit and Yahoo, I thought it was time for some positive feedback. Remote work is perfect for me: it allows me to be very productive, while still being able to see my family a lot. Even though I put in a lot of hours every week, I'm still seeing my kids grow up every single day and I am there for them when they need me. And that is something priceless.

Reference: My Five Rules for Remote Working from our JCG partner Joram Barrez at the Small steps with big feet blog....

Spring Rest API with Swagger – Integration and configuration

Nowadays, exposed APIs are finally getting the attention they deserve and companies are starting to recognize their strategic value. However, working with 3rd party APIs can be really tedious work, especially when these APIs are not maintained, are ill designed or are missing any documentation. That's why I decided to look around for ways to provide fellow programmers and other team members with proper documentation when it comes to integration. One way to go is to use WADL, which is a standard specifically designed to describe HTTP-based web applications (like REST web services). However, there are a few drawbacks to using WADL that made me look elsewhere for solutions on how to properly document and expose API documentation.

Swagger

Another way might be to go with Swagger. Swagger is both a specification and a framework implementation that supports the full life cycle of RESTful web services development. The specification itself is language-agnostic, which might come in handy in a heterogeneous environment. Swagger also comes with the Swagger UI module, which allows both programmers and other team members to meaningfully interact with APIs and gives them a way to work with the API while providing access to the documentation.

Spring with Jersey example

Not long ago, I came across an article describing the Swagger specification and I was pretty intrigued to give it a try. At that time I was working on a sweet little microservice, so I had an ideal testing ground to try it out. Based on that, I prepared a short example of how to use Swagger in your application when you are using the Spring framework and Jersey. The example code models a simplified REST API for a subset of possible APIs in a shop application scenario.

Note: Import declarations were omitted from all Java code samples.

Jersey servlet

Before we get down to introducing Swagger to our code, let's take a moment and explore our example a little. First of all, let's look at web.xml. There is a plain old web.xml with a few simple declarations and mappings in the code sample below. Nothing special here, just a bunch of configuration.

<web-app id="SpringWithSwagger" version="2.4"
    xmlns="http://java.sun.com/xml/ns/j2ee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">

    <display-name>Spring Jersey Swagger Example</display-name>

    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:beans.xml</param-value>
    </context-param>

    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>

    <servlet>
        <servlet-name>jersey-serlvet</servlet-name>
        <servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
        <init-param>
            <param-name>javax.ws.rs.Application</param-name>
            <param-value>com.jakubstas.swagger.SpringWithSwagger</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>jersey-serlvet</servlet-name>
        <url-pattern>/rest/*</url-pattern>
    </servlet-mapping>
</web-app>

Endpoint

The second thing we are going to need is the endpoint that defines our REST service, for example an employee endpoint for listing current employees. Once again, there is nothing extraordinary here, only a few exposed methods providing core API functionality.

package com.jakubstas.swagger.rest;

@Path("/employees")
public class EmployeeEndpoint {

    private List<Employee> employees = new ArrayList<Employee>();

    {
        final Employee employee = new Employee();
        employee.setEmployeeNumber(1);
        employee.setFirstName("Jakub");
        employee.setSurname("Stas");

        employees.add(employee);
    }

    @OPTIONS
    public Response getProductsOptions() {
        final String header = HttpHeaders.ALLOW;
        final String value = Joiner.on(", ").join(RequestMethod.GET, RequestMethod.OPTIONS).toString();

        return Response.noContent().header(header, value).build();
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response getEmployees() {
        return Response.ok(employees).build();
    }
}

Swagger dependencies

The first thing we need to do is to include all required Swagger dependencies in our pom.xml, as shown below (lucky for us, it's only a single dependency).

<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    ...
    <properties>
        ...
        <swagger-version>1.3.8</swagger-version>
        ...
    </properties>
    ...
    <dependencies>
        ...
        <!-- Swagger -->
        <dependency>
            <groupId>com.wordnik</groupId>
            <artifactId>swagger-jersey2-jaxrs_2.10</artifactId>
            <version>${swagger-version}</version>
        </dependency>
        ...
    </dependencies>
</project>

Swagger configuration

Now, let's take a look at how Swagger integrates into our example. As with any introduction of a new dependency to your project, you should be concerned with how invasive and costly this process will be. The only affected places will be your REST endpoints, the Spring configuration and some transfer objects (given you choose to include them), as you will see in the following code samples. This means that there is no configuration needed in web.xml for Swagger to work with your Spring application, meaning it's rather non-invasive in this way and remains constrained within the API's realm.

You need three basic properties for Swagger to work:

- API version: provides the version of the application API
- base path: the root URL serving the API
- resource package: defines the package where to look for Swagger annotations

Since API maintenance is primarily the responsibility of analysts and programmers, I like to keep this configuration in a separate property file called swagger.properties. This way it is not mixed with the application configuration and is less likely to be modified by accident. The following snippet depicts such a configuration file.

swagger.apiVersion=1.0
swagger.basePath=http://[hostname/ip address]:[port]/SpringWithSwagger/rest
swagger.resourcePackage=com.jakubstas.swagger.rest

For the second part of the configuration, I created a configuration bean making use of the previously mentioned properties. Using Spring's @PostConstruct annotation, which provides a bean life-cycle hook, we are able to instantiate and set certain attributes that Swagger requires but is not able to get on its own (in the current version, at least).

package com.jakubstas.swagger.rest.config;

/**
 * Configuration bean to set up Swagger.
 */
@Component
public class SwaggerConfiguration {

    @Value("${swagger.resourcePackage}")
    private String resourcePackage;

    @Value("${swagger.basePath}")
    private String basePath;

    @Value("${swagger.apiVersion}")
    private String apiVersion;

    @PostConstruct
    public void init() {
        final ReflectiveJaxrsScanner scanner = new ReflectiveJaxrsScanner();
        scanner.setResourcePackage(resourcePackage);

        ScannerFactory.setScanner(scanner);
        ClassReaders.setReader(new DefaultJaxrsApiReader());

        final SwaggerConfig config = ConfigFactory.config();
        config.setApiVersion(apiVersion);
        config.setBasePath(basePath);
    }

    public String getResourcePackage() {
        return resourcePackage;
    }

    public void setResourcePackage(String resourcePackage) {
        this.resourcePackage = resourcePackage;
    }

    public String getBasePath() {
        return basePath;
    }

    public void setBasePath(String basePath) {
        this.basePath = basePath;
    }

    public String getApiVersion() {
        return apiVersion;
    }

    public void setApiVersion(String apiVersion) {
        this.apiVersion = apiVersion;
    }
}

The last step is to declare the following three Swagger beans: ApiListingResourceJSON, ApiDeclarationProvider and ResourceListingProvider.

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <context:component-scan base-package="com.jakubstas.swagger" />
    <context:property-placeholder location="classpath:swagger.properties" />

    <bean class="com.wordnik.swagger.jaxrs.listing.ApiListingResourceJSON" />
    <bean class="com.wordnik.swagger.jaxrs.listing.ApiDeclarationProvider" />
    <bean class="com.wordnik.swagger.jaxrs.listing.ResourceListingProvider" />
</beans>

Swagger is now configured and you can check whether your setup is working properly. Just enter the URL from your basePath variable followed by /api-docs into your browser and check the result. You should see output similar to the following snippet, which I received after accessing http://[hostname]:[port]/SpringWithSwagger/rest/api-docs/ in my example.

{"apiVersion":"1.0","swaggerVersion":"1.2"}

What is next?

If you followed all the steps, you should now have a working setup to start documenting your API with. I will showcase how to describe APIs using Swagger annotations in my next article, called Spring Rest API with Swagger – Creating documentation. The code used in this micro series is published on GitHub and provides examples for all discussed features and tools. Please enjoy!

Reference: Spring Rest API with Swagger – Integration and configuration from our JCG partner Jakub Stas at the Jakub Stas blog....

How To Control Access To REST APIs

Exposing your data or application through a REST API is a wonderful way to reach a wide audience. The downside of a wide audience, however, is that it's not just the good guys who come looking.

Securing REST APIs

Security consists of three factors:

- Confidentiality
- Integrity
- Availability

In terms of Microsoft's STRIDE approach, the security compromises we want to avoid with each of these are Information Disclosure, Tampering, and Denial of Service. The remainder of this post will focus only on Confidentiality and Integrity. In the context of an HTTP-based API, Information Disclosure is applicable to GET methods and any other methods that return information. Tampering is applicable to PUT, POST, and DELETE.

Threat Modeling REST APIs

A good way to think about security is by looking at all the data flows. That's why threat modeling usually starts with a Data Flow Diagram (DFD). In the context of a REST API, a close approximation to the DFD is the state diagram. For proper access control, we need to secure all the transitions. The traditional way to do that is to specify restrictions at the level of URI and HTTP method. For instance, this is the approach that Spring Security takes.

The problem with this approach, however, is that both the method and the URI are implementation choices. URIs shouldn't be known to anybody but the API designer/developer; the client will discover them through link relations. Even the HTTP methods can be hidden until runtime with mature media types like Mason or Siren. This is great for decoupling the client and server, but now we have to specify our security constraints in terms of implementation details! This means only the developers can specify the access control policy. That, of course, flies in the face of best security practices, where the access control policy is externalized from the code (so it can be reused across applications) and specified by a security officer rather than a developer. So how do we satisfy both requirements?

Authorizing REST APIs

I think the answer lies in the state diagram underlying the REST API. Remember, we want to authorize all transitions. Yes, a transition in an HTTP-based API is implemented using an HTTP method on a URI. But in REST, we shield the URI using a link relation. The link relation is very closely related to the type of action you want to perform. The same link relation can be used from different states, so the link relation can't be the whole answer. We also need the state, which is based on the representation returned by the REST server. This representation usually contains a set of properties and a set of links. We've got the links covered with the link relations, but we also need the properties.

In XACML terms, the link relation indicates the action to be performed, while the properties correspond to resource attributes. Add to that the subject attributes obtained through the authentication process, and you have all the ingredients for making an XACML request!

There are two places where such access control checks come into play. The first is obviously when receiving a request. You should also check permissions on any links you want to put in the response. The links that the requester is not allowed to follow should be omitted from the response, so that the client can faithfully present the next choices to the user.

Using XACML For Authorizing REST APIs

I think the above shows that REST and XACML are a natural fit.
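To make that concrete, here is a hypothetical sketch of the second check, filtering the links in a response. PdpClient, Decision, Link, Subject, Representation and their accessors are all invented names standing in for whatever XACML PDP API you use; imports are omitted:

public class LinkAuthorizer {

    // Hypothetical client for an XACML Policy Decision Point.
    private final PdpClient pdp;

    public LinkAuthorizer(PdpClient pdp) {
        this.pdp = pdp;
    }

    // Only emit the links the authenticated subject is allowed to follow.
    public List<Link> authorizedLinks(Subject subject, Representation resource, List<Link> candidates) {
        final List<Link> permitted = new ArrayList<Link>();
        for (Link link : candidates) {
            final Decision decision = pdp.decide(
                    subject.getAttributes(),    // subject attributes from authentication
                    link.getRelation(),         // the link relation is the XACML action
                    resource.getProperties());  // resource properties are resource attributes
            if (decision == Decision.PERMIT) {
                permitted.add(link);
            }
        }
        return permitted;
    }
}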
All the more reason to check out XACML if you haven't already, especially XACML's REST Profile and the forthcoming JSON Profile.

Reference: How To Control Access To REST APIs from our JCG partner Remon Sinnema at the Secure Software Development blog....

Understanding strategy pattern by designing game of chess

Today we will try to understand the Strategy Pattern with the help of an example. The example we will consider is the game of chess. The intention here is to explain the strategy pattern, not to build a comprehensive chess game solution.

Strategy Pattern: The Strategy pattern is known as a behavioural pattern; it's used to manage algorithms, relationships and responsibilities between objects. The main benefit of the strategy pattern is the ability to choose the algorithm/behaviour at runtime.

Let's try to understand this by using it to design the chess game. In chess there are different characters, like King, Queen and Bishop, and all of them have different moves. There could be many possible solutions to this design; let's explore them one by one:

1. The first way would be to define the movement in each and every class: every character would have its own move() implementation. In this way there is no code reusability, and we cannot change the implementation at run time.

2. Make a separate MovementController class and put an if/else for each type of movement of an object:

public class BadDesginCharacterMovementController {

    public void move(Character character) {
        if (character instanceof King) {
            System.out.print("Move One Step forward");
        } else if (character instanceof Queen) {
            System.out.print("Move One Step forward");
        } else if (character instanceof Bishop) {
            System.out.print("Move diagonally");
        }
    }
}

This is a poor design with strong coupling; moreover, the if/else chain makes it ugly. So we would like to have a design where we can have loose coupling, where we can decide the movement algorithm at run time, and where there is code reusability. Let's see the complete implementation using the Strategy Pattern. Below is the class diagram of our implementation:

The complete source code can be downloaded from here.

We will have our base class, Character, which all the characters can extend and in which they can set their own MovementBehaviour implementation:

public class Character {

    private MovementBehaviour movementBehaviour;

    String move() {
        return movementBehaviour.move();
    }

    public void setMovementBehaviour(MovementBehaviour movementBehaviour) {
        this.movementBehaviour = movementBehaviour;
    }
}

This class has a movement behaviour:

public interface MovementBehaviour {

    String move();
}

So, each character (King, Queen, Bishop) will extend Character, and each can have its own implementation of MovementBehaviour:

public class King extends Character {

    public King() {
        setMovementBehaviour(new SingleForward());
    }
}

Here, for simplicity, I have called the setMovementBehaviour method inside the constructor of King. Similarly, another character, Queen, can be defined as:

public class Queen extends Character {

    public Queen() {
        setMovementBehaviour(new SingleForward());
    }
}

And Bishop as:

public class Bishop extends Character {

    public Bishop() {
        setMovementBehaviour(new DiagonalMovement());
    }
}

The implementations of the different movements can be as follows.

Single forward:

public class SingleForward implements MovementBehaviour {

    @Override
    public String move() {
        return "move one step forward";
    }
}

Diagonal movement:

public class DiagonalMovement implements MovementBehaviour {

    @Override
    public String move() {
        return "Moving Diagonally";
    }
}

With this example we can understand the Strategy Pattern.

Reference: Understanding strategy pattern by designing game of chess from our JCG partner Anirudh Bhatnagar at the anirudh bhatnagar blog....

CallSerially The EDT & InvokeAndBlock (Part 1)

We last explained some of the concepts behind the EDT in 2008, so it's high time we wrote about it again. There is a section about it in the developer guide as well as in the courses on Udemy, but since this is the most important thing to understand in Codename One, it bears repeating. One of the nice things about the EDT is that many of the concepts within it are similar to the concepts in pretty much every other GUI environment (Swing/FX, Android, iOS etc.). So if you can understand this explanation, it might help you when working on other platforms too.

Codename One can have as many threads as you want; however, there is one thread created internally in Codename One named "EDT", for Event Dispatch Thread. This name doesn't do the thread justice, since it handles everything, including painting etc. You can imagine the EDT as a loop such as this:

while(codenameOneRunning) {
    performEventCallbacks();
    performCallSeriallyCalls();
    drawGraphicsAndAnimations();
    sleepUntilNextEDTCycle();
}

The general rule of thumb in Codename One is: every time Codename One invokes a method, it's probably on the EDT (unless explicitly stated otherwise); every time you invoke something in Codename One, it should be on the EDT (unless explicitly stated otherwise). There are a few notable special cases:

- NetworkManager/ConnectionRequest use the network thread internally and not the EDT. However, they can/should be invoked from the EDT.
- BrowserNavigationCallback, due to its unique function, MUST be invoked on the native browser thread.
- Display's invokeAndBlock/startThread create completely new threads.

Other than those, pretty much everything is on the EDT. If you are unsure, you can use the Display.isEDT method to check whether you are on the EDT or not.

EDT Violations

You can violate the EDT in two major ways:

- Call a method in Codename One from a thread that isn't the EDT thread (e.g. the network thread or a thread created by you).
- Do a CPU-intensive task (such as reading a large file) on the EDT; this will effectively block all event processing, painting etc., making the application feel slow.

Luckily we have a tool in the simulator: the EDT violation detection tool. This effectively prints a stack trace for suspected violations of the EDT. It's not foolproof and might land you with false positives, but it should help you with some of these issues, which are hard to detect.

So how do you prevent an EDT violation? To prevent abuse of the EDT thread (slow operations on the EDT), just spawn a new thread using either new Thread(), Display.startThread or invokeAndBlock (more on that later). Then, when you need to broadcast your updates back to the EDT, you can use callSerially or callSeriallyAndWait.

CallSerially

callSerially invokes the run() method of the runnable argument it receives on the Event Dispatch Thread. This is very useful if you are on a separate thread, but it is also occasionally useful when we are on the EDT and want to postpone actions to the next cycle of the EDT (more on that next time). callSeriallyAndWait is identical to callSerially, but it waits for the callSerially to complete before returning. For obvious reasons it can't be invoked on the EDT.

In the second part of this mini tutorial I will discuss invokeAndBlock and why we might want to use callSerially when we already are on the EDT.

Reference: CallSerially The EDT & InvokeAndBlock (Part 1) from our JCG partner Shai Almog at the Codename One blog....