


Mockito 101

Mockito is a mocking framework that lets you write beautiful tests with a clean and simple API. It biases toward minimal specifications, makes different behaviors look different, and displays clear error messages.

Creating Mocks

To create a mock using Mockito, simply annotate mocks with @Mock and call MockitoAnnotations.initMocks(this).

import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class FooClassTest {

    @Mock
    Foo mockFoo;

    public void setUp() {
        MockitoAnnotations.initMocks(this);
        ...
    }
    ...
}

Stubbing Values

Stubbing values can simulate the behavior of existing code or be a temporary substitute for yet-to-be-developed code. By default, for all methods that return a value, a mock returns null, an empty collection, or an appropriate primitive/primitive wrapper value (e.g. 0, false, …). You can override the stubbed values as below. Once stubbed, the method will always return the stubbed value, regardless of how many times it is called. For a method with a void return, we usually do not need to stub it.

import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.when;
...
// a method that returns values
when(mockFoo.someCall()).thenReturn(someValue);
when(mockFoo.someCall()).thenThrow(new FooException());

// a method with a void return
doThrow(new FooException()).when(mockFoo).voidMethodThatThrows();

Verifying a Method Was Called

// call the subject under test
verify(mockFoo, times(2)).someCall();
verify(mockFoo).someCall();
verify(mockFoo).callWithVoidReturnType();

What is the difference between "stubbing" and "verifying"? In a nutshell, "stubbing" should be used for the items that you don't really care about, but that are necessary to make the test pass. In contrast, "verifying" should be used to check the behavior.

Verifying the Order of Calls to a Single Object

InOrder order1 = Mockito.inOrder(mockFoo);
order1.verify(mockFoo).firstCall();
order1.verify(mockFoo).thirdCall();

InOrder order2 = Mockito.inOrder(mockFoo);
order2.verify(mockFoo).secondCall();
order2.verify(mockFoo).fifthCall();

Verifying the Order of Calls Across Multiple Objects

Foo mockFoo = Mockito.mock(Foo.class);
Bar mockBar = Mockito.mock(Bar.class);

// call the subject under test
InOrder order = Mockito.inOrder(mockFoo, mockBar);
order.verify(mockFoo).firstCall();
order.verify(mockBar).secondCall();

Verifying That Only the Expected Calls Were Made

In general, tests for no more interactions should be rare.

// call the subject under test
verify(mockFoo).expectedCall();
verify(mockFoo).someOtherExpectedCall();
verifyNoMoreInteractions(mockFoo);

Verifying That Specific Calls Are Not Made

Testing that a specific call was not made is often better than checking for "no more calls."

// call the subject under test
verify(mockStream, never()).close();
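Putting stubbing and verification together, a complete test might look like the sketch below. Note that BarService, its run() method, and Foo's someCall()/save() methods are invented here purely for illustration; only the Mockito calls themselves are real API.

import static org.mockito.Mockito.*;

import org.junit.Before;
import org.junit.Test;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class BarServiceTest {

    @Mock
    Foo mockFoo; // the collaborator we stub and verify

    @Before
    public void setUp() {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void savesTheValueReturnedByFoo() {
        // stubbing: needed for the test to run, but not what we assert on
        when(mockFoo.someCall()).thenReturn("someValue");

        new BarService(mockFoo).run(); // the (hypothetical) subject under test

        // verifying: the behavior we actually care about
        verify(mockFoo).save("someValue");
    }
}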
Matchers

We can use matchers for mocked method parameters when == and equals cannot be used to match a parameter, either for stubbing or verifying. If you find that you need complicated matchers, consider simplifying your subject under test or your tests, or consider using a hand-rolled fake instead of a mock.

import static org.mockito.Mockito.*;

// Both of these forms use "equals"
when(mockFoo.set("blah", 2)).thenReturn(value);
when(mockFoo.set(eq("blah"), eq(2))).thenReturn(value);

when(mockFoo.set(contains("la"), eq(2))).thenReturn(value);
when(mockFoo.set(eq("blah"), anyInt())).thenReturn(value);
when(mockFoo.set(anyObject(), eq(2))).thenReturn(value);
when(mockFoo.set(isA(String.class), eq(2))).thenReturn(value);
when(mockFoo.set(same(expected), eq(2))).thenReturn(value);

ArgumentCaptor<String> sArg = ArgumentCaptor.forClass(String.class);
when(mockFoo.set(sArg.capture(), eq(2))).thenReturn(value);
...
// returns last captured value
String capturedString = sArg.getValue();
List<String> capturedStrings = sArg.getAllValues();

Partial Mocks

When using spy or CALLS_REAL_METHODS, you may want to use the alternative stubbing syntax that does not call the existing method or stub: doReturn("The spy has control.").when(mockFoo).aMethod().

import org.mockito.Mockito;

Foo mockFoo = Mockito.spy(new Foo()); // Note: instance, not class.

// Note: "when" calls the real method, see tip below.
when(mockFoo.aMethod()).thenReturn("The spy has control.");

// call the subject under test

verify(mockFoo).aMethod();

// Verify a call to a real method was made.
verify(mockFoo).someRealMethod();

// Alternative construct, that will fail if an unstubbed abstract
// method is called.
Foo mockFoo = Mockito.mock(Foo.class, Mockito.CALLS_REAL_METHODS);
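Since when(...) on a spy invokes the real method before the stub takes effect, the doReturn form mentioned above is usually the safer choice. A minimal sketch, reusing the spy from the example:

import static org.mockito.Mockito.*;

Foo spyFoo = Mockito.spy(new Foo());

// Stubs aMethod() without invoking the real implementation:
doReturn("The spy has control.").when(spyFoo).aMethod();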
Reference: Mockito 101 from our JCG partner Yifan Peng at the PGuru blog....

High availability design

If you have ever travelled on Indian Railways you will have noticed that the capacity the train is supposed to handle holds no meaning, because the number of people it actually carries is just going to be way over it. That's how the passenger load and platforms all across the country are managed. The method mostly works fine, but from time to time there are breakdowns and trains are delayed, sometimes cancelled, but life goes on, because people expect that this will happen with Indian Railways. When we design and write those big platforms something similar happens, but the biggest difference is that the customers/clients who have paid for the software don't like those downtimes (cancellations) and slowness (delays). Over the last 2 years or so I have had many conversations where 2 key NFRs intersect – Performance and Availability. I have started to realize that while these two end up being joined at the hip and need to work closely together, they still mean a world of difference, and each of them needs to be addressed differently.

The Start

Most projects (almost all) in today's world have some Non Functional Requirements, and the 3 that take top priority are Performance, Availability and Security. Some numbers that most often get thrown around are:

Pages should open in under 1 second, and so forth.
There should be an uptime of 99.99% – which actually allows about 1.01 minutes of downtime per week.

And that's about it. Of course there are more, but 95% of the conversations revolve around these two. Then we go about designing solutions to meet those numbers.

Just before Go Live

Things are all good while we are under implementation, and we do everything to make sure we meet those 2 or so numbers with our design. We do performance modeling, then we execute those performance models and prove that the traffic model/simulation of what we understand as the client's use cases works fine. So then we say with confidence that our system will meet the performance needs. In this model we do take a capacity increase of 40% or so, again based on analytics and some future growth; we bake those numbers into our calculations, and then we are even sure that our system will be able to handle a bit more if that happens at times. Now that we are so sure that performance is all good for the traffic we expect to get, we believe that our software should continue to work fine, because everything will work within the same constraints in which we have tested it. And because those constraints are well defined, there should really be no problem with us meeting our availability numbers.

We are in flight, Houston

Then the next cool thing happens – we go live, and the system that we built with so much pride and caution, and that we tested so much, is live, and so many people start to use it. It certainly is an exhilarating feeling, seeing your sweat and hard work go live, with people seeing and interacting with what you have built. Until a time comes when the system goes down. You will be sleeping in the middle of the night and you will get a call, and someone will be telling you to get up, switch on your laptop and get ready to debug why the system went down and get it up and running quick. It takes you a while to think – what the hell just happened?
We did everything we had to do. There were even reviews, and everything was looking good. How can it go down? Well, there is something called the Universe that has a different set of plans for you, and those plans just went into motion. So what happens? Indian Railways happens! You realize that a traffic pattern you did not know about has suddenly come and hit your servers. A Chinese search engine has started to crawl all over your site, and the site is not even in China. Well, it's a free ride, this internet, and anyone can get on. Why can some people sitting in a country not see a website that is not targeted at them? Well, we did not have those specifications. We put in all the checks, but we never anticipated all those search engines and bots and the crawling they would do. What do we do? We fix the problem and either put in a block to stop that traffic pattern, or we throttle it, or we add servers to handle it. And then we go about thinking: okay, now I am good; this is done and dusted and won't happen again.

Universe has other plans

Next time someone will add some bad servers into WIP and the cluster will fail. And the next time someone will delete the database and it will crash again, and the next, and the next, and the next…

Fixing begins

Of course we have to do something, but what we do is start looking at our solution and check why performance is bad. Performance – really? We do everything we can to fix the performance problem, but we spend no time on the availability aspect. Going back to the Indian Railways analogy I drew up front: a train – engine and bogeys – is built to handle a certain load, and that was agreed. It can't be more precise, as there are seats, and there are tickets that need to be bought to get onto that train. As long as the number of people that get in stays within those constraints, our problems will be much smaller. Everything around our software (our train) needs to work in tandem. But it is difficult to control. The internet is much wider than Indian Railways, and who comes, when they come and how many come is just not predictable. It becomes important to acknowledge that no matter what you do, there will always be a model of traffic that will come and visit you and take your system outside of its known boundaries, and more often than not, once our system is operating outside of those boundaries, it's bound to fail at some point. This is where the Availability and Resilience perspectives need to be brought into the picture.

Next time do something else too: Availability and Resilience

At their core, these perspectives ask you to set some designs, practices and, most importantly, an expectation with your clients as to what you are dealing with. We all know that in the last decade how we run our business and how we deal with internet hosted sites has become very different from how we used to build systems in the past. I paraphrase from an article – "If your site is down, your business will suffer" – and yet everyone will want 24×7 uptime, while we sold NFRs of 99.99% (about a minute of downtime per week) or maybe 99.9999% (about 0.6 seconds per week) thinking it's okay. If the expectation is 24×7, why would we even start with something less? We then need to look at the next 2 most important metrics, which we miss all along and never plan, design or test for. It's like we take them for granted.
It's the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO). As we speak about uptime and outages: whenever we have an unplanned outage, and we have promised a certain uptime, we need to have the operational ability to do whatever is needed to get the system up and running. If we designed for 99.99%, we need to have methods in place to get the system back in about a minute – it feels like Minute to Win It. In the whole software development process we miss this key step – to design for this game, and for how we will be set up to get the system back up and running in that time frame.

The Operational Viewpoint

The operational viewpoint is a key architectural principle that we omit to design for when we are building software systems and platforms. How we run software has completely changed in the last decade with cloud hosting. As the cloud makes things so easy to provision and host (AWS), we believe that everything should be easy. So where in the past we used to focus a lot more on availability design, we now almost take it for granted. This viewpoint is something that should become our bread and butter during the implementation phase, with a dedicated team that looks at operational processes and tools and provides methods to make recovery possible in the time it is expected to take.

Categorize and Prioritize

This is where it becomes critical to have a conversation with our clients and understand how the various parts of the system can be identified and broken down into services. A classification of sorts, like "Platinum", "Gold", "Bronze", starts to make sense, to get the business to prioritize which services should get top priority in case of an unplanned outage. The operational design and implementation team then needs to focus on how to look at the system and how to get those services up and running quickly. This is a key input for the implementation phase, because unless those services are known, there won't be a way to code them as such.

Recover, don't debug

When these unplanned outages happen, the team responsible for managing the system more often than not starts with a different mindset. They are like cops who have reached a crime scene after a crime has happened. They start by looking around for evidence and analyzing the crime scene. The idea is to look for evidence, then solve the crime, and hopefully find the criminal and put them behind bars. Well, we all know how long it takes to get there. With cops, I can see the point – you can't have cameras in every home and everywhere, so you will have to do post-mortems. But this is a software system, and we need to be firefighters instead. The idea has to be to put the fire out, and do it quickly, before it takes the whole block away. The idea has to be to realize some damage has been done – that's a lost cause; let's see how we can save what's left. In the software that we write and the platforms that we host, we need to have something I refer to as the "last known good state", and when an outage happens, what we need to do is just get ourselves back to that state. But what do we do when the state or behavior is not under our control? Going back to Indian Railways: what do you do when you can't control the number of people who are coming onto the platform and onto your train – they just keep coming in; even if you find a way to replace the train, they will keep coming. One way is of course to start adding more trains to the platform. With the cloud you can do that, and keep adding servers until all the traffic is dealt with. This is where we move seamlessly into the Performance and Scalability perspective – and where we lose sight of the problem and try to fix something else.

So what should we do if we cannot control the traffic? We need an effective mechanism on our train that won't allow everyone to get in. We need the ability to know who can get in. We have ID cards and turnstiles on the platform; if our platforms do not give us those, why can we not put them on our trains? It may not stop all malicious traffic, but it will certainly stop a lot of it. Most importantly, you can go back and authorize your Platinum users to get in while you block everyone else. So in the software world, you need to have a cutover switch that will stop all traffic and only allow what is key for the business. Unless everything that is coming in is Platinum, which is not the case most times, you will be able to recover your most important services easily. Of course there is a degradation of other services, but that is something you would have already set expectations about with your clients and the business. They will be mad, but less mad.
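Such a cutover switch can be as simple as a gate that, in "survival mode", admits only the services the business classified as Platinum. A minimal sketch (the class and method names are invented for illustration, not taken from any framework):

import java.util.Set;
import java.util.concurrent.atomic.AtomicBoolean;

public final class CutoverSwitch {

    private final AtomicBoolean survivalMode = new AtomicBoolean(false);
    private final Set<String> platinumServices;

    public CutoverSwitch(Set<String> platinumServices) {
        this.platinumServices = platinumServices;
    }

    public void enterSurvivalMode() { survivalMode.set(true); }

    public void exitSurvivalMode() { survivalMode.set(false); }

    // Returns true if a request for the given service may proceed;
    // in survival mode only Platinum services are let through.
    public boolean admit(String serviceName) {
        return !survivalMode.get() || platinumServices.contains(serviceName);
    }
}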
The 300

If you have not seen 300, you have got to see it and learn what it can do for your systems and your ability to recover, if you handle your enemy (the traffic) through a funnel. You will last longer and you will get a lot more time to fight the enemy. On top of that, the pressure and stress that the business creates when their Platinum services are not available will also be reduced. You can then go about debugging once you have contained the problem.

Nothing is for Free

Of course we can do much more, but the more we do, the more it will cost. Back to Indian Railways: we can either choose to save some money on building our train by not installing those ID card turnstiles, or we can invest that money and ensure we have continuity. The turnstiles will get more and more complex and will need a lot more fine tuning to handle all scenarios. So if you also need to handle use cases where you don't want your turnstile to fail, then you need to install 2 of them on every bogey, which will cost more, and so the setup goes on. The point I am trying to make is that when we go from 99% to 99.99% to 99.999%, we don't look at the cost drivers and what they will do to the project. We may think it would just mean a few more rounds of performance testing and we should be done. Yet we know now what's going to happen. If you fail to articulate to the clients what these numbers will mean to them in terms of cost, you won't ever get them to accept the reality of the internet and the universe. More often than not you will find that the business realizes there are services they can live without. Of course, eventually, with all the many fixes, you will eradicate a lot of the cases that led to failures, but it will have taken you so long that the reputation the brand holds so dear is already damaged. If you think of this as "risk money", the way investing in it guards your reputation will justify the cost every time – that much I can assure you.

Reference: High availability design from our JCG partner Kapil Viren Ahuja at the Scratch Pad blog....

JVM PermGen – where art thou?

This post covers some basics of the JVM memory structure and quickly peeks into PermGen to find out where it has disappeared to since the advent of Java SE 8.

Bare Basics

The JVM is just another process running on your system, and the magic begins with the java command. Like any OS process, it needs memory for its run time operations. Remember – the JVM itself is a software abstraction of hardware, on top of which Java programs run and boast OS independence and WORA (write once, run anywhere).

Quick coverage of the JVM memory structure

As per the spec, the JVM memory is divided into 5 virtual segments:

Heap
Method Area (non heap)
JVM Stack
Native Stack
PC Registers

Heap

Every object allocated in your Java program needs to be stored in memory. The heap is the area where all the instantiated objects get stored. Yes – blame the new operator for filling up your Java heap!

Shared by all threads.
The JVM throws java.lang.OutOfMemoryError when it's exhausted.
Use the -Xms and -Xmx JVM options to tune the heap size.

It is sub-divided into:

Eden (Young) – New objects, or ones with short life expectancy, exist in this area, which is regulated using the -XX:NewSize and -XX:MaxNewSize parameters. GC (garbage collector) minor sweeps this space.
Survivor – Objects which are still being referenced and manage to survive garbage collection in the Eden space end up in this area. This is regulated via the -XX:SurvivorRatio JVM option.
Old (Tenured) – This is for objects which survive long garbage collections in both the Eden and Survivor spaces (due to lingering references, of course). A special garbage collector takes care of this space; object de-allocation in the tenured space is handled by GC major.

Method Area

Also called the non-heap area (in the HotSpot JVM implementation), it is divided into 2 major sub-spaces:

Permanent Generation – This area stores class related data from class definitions: structures, methods, fields, method data and code, and constants. It can be regulated using -XX:PermSize and -XX:MaxPermSize. It can cause java.lang.OutOfMemoryError: PermGen space if it runs out of space.
Code Cache – The cache area is used to store compiled code. The compiled code is nothing but native code (hardware specific), and is taken care of by the JIT (Just In Time) compiler, which is specific to the Oracle HotSpot JVM.

JVM Stack

Has a lot to do with methods in Java classes.
Stores local variables and regulates method invocation, partial results and return values.
Each thread in Java has its own (private) copy of the stack, which is not accessible to other threads.
Tuned using the -Xss JVM option.

Native Stack

Used for native methods (non-Java code).
Allocated per thread.

PC Registers

Program counter specific to a particular thread.
Contains addresses of the JVM instructions currently being executed (undefined in the case of native methods).

So, that's about it for the JVM memory segment basics. Coming back to the Permanent Generation: so where is PermGen???

Essentially, PermGen has been completely removed and replaced by another memory area known as the Metaspace.

Metaspace – quick facts

It's part of the native heap memory.
Can be tuned using -XX:MetaspaceSize and -XX:MaxMetaspaceSize.
Clean-up initiation is driven by the -XX:MetaspaceSize option, i.e. when MetaspaceSize is reached.
java.lang.OutOfMemoryError: Metadata space will be thrown if the native space is exhausted.
The PermGen related JVM options, i.e. -XX:PermSize and -XX:MaxPermSize, will be ignored if present.
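A quick way to see these pools on your own JVM is the standard java.lang.management API; the sketch below lists every memory pool with its current usage. On Java 8 the output includes a Metaspace pool, while on Java 7 and earlier you would see a Perm Gen pool instead (exact pool names vary with the garbage collector in use).

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPools {

    public static void main(String[] args) {
        // Prints pools such as Eden Space, Survivor Space, Old Gen,
        // Code Cache, and Metaspace (or Perm Gen before Java 8).
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getName() + " : " + pool.getUsage());
        }
    }
}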
This was obviously just the tip of the iceberg. For comprehensive coverage of the JVM, there is no reference better than the specification itself! You can also explore:

The Java Language Specification
What's new in Java 8?

Cheers!!!

Reference: JVM PermGen – where art thou? from our JCG partner Abhishek Gupta at the Object Oriented.. blog....

Caveats of HttpURLConnection

Does this piece of code look OK to you?

HttpURLConnection connection = null;
try {
    connection = (HttpURLConnection) url.openConnection();
    try (InputStream in = connection.getInputStream()) {
        return streamToString(in);
    }
} finally {
    if (connection != null) connection.disconnect();
}

It looks good – it opens a connection, reads from it, closes the input stream, releases the connection, and that's it. But while running some performance tests, and trying to figure out a bottleneck issue, we found out that disconnect() is not as benign as it seems – when we stopped disconnecting our connections, there were twice as many outgoing connections. Here's the javadoc:

Indicates that other requests to the server are unlikely in the near future. Calling disconnect() should not imply that this HttpURLConnection instance can be reused for other requests.

And on the class itself:

Calling the disconnect() method may close the underlying socket if a persistent connection is otherwise idle at that time.

This is still unclear, but gives us a hint that there's something more. After reading a couple of stackoverflow and java.net answers (1, 2, 3, 4) and also the android documentation of the same class, which is actually different from the Oracle implementation, it turns out that .disconnect() actually closes (or may close, in the case of android) the underlying socket.

Then we can find this bit of documentation (it is linked in the javadoc, but it's not immediately obvious that it matters when calling disconnect), which gives us the whole picture: the keep.alive property (default: true) indicates that sockets can be reused by subsequent requests. That works by leaving the connection to the server (which supports keep alive) open, so that the overhead of opening a socket is no longer needed. By default, up to 5 such sockets are reused (per destination). You can increase this pool size by setting the http.maxConnections property. However, after increasing that to 10, 20 and 50, there was no visible improvement in the number of outgoing requests.

However, when we switched from HttpURLConnection to the apache http client, with a pooled connection manager, we had 3 times more outgoing connections per second. And that's without fine-tuning it.

Load testing, i.e. bombarding a target server with as many requests as possible, sounds like a niche use-case. But in fact, if your application invokes a web service, either within your stack or an external one, as part of each request, then you have the same problem – you will be able to make fewer requests per second to the target server, and consequently respond to fewer requests per second to your users.

The advice here is: almost always prefer the apache http client – it has a way better API and it seems to have way better performance, without the need to understand exactly how it functions underneath. But be careful of the same caveats there as well – check the pool size and connection reuse. If using HttpURLConnection, do not disconnect your connections after you read their response, consider increasing the socket pool size, and be careful of related problems.
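For completeness, here is a minimal sketch of the pooled apache http client setup (assuming HttpClient 4.3+; the pool sizes are illustrative, not tuned values):

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(50);            // total connections kept in the pool
cm.setDefaultMaxPerRoute(20);  // connections per target host

CloseableHttpClient client = HttpClients.custom()
        .setConnectionManager(cm)
        .build();

// The connection is returned to the pool (not closed) once the
// response entity has been fully consumed.
try (CloseableHttpResponse response = client.execute(new HttpGet(url))) {
    return EntityUtils.toString(response.getEntity());
}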
Reference: Caveats of HttpURLConnection from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog blog....

The Hidden Cost Of Estimation

"Why would you want a rough estimate, when I can do a more precise one?" And really, if we can do something better, why do it halfway? There's a simple answer, but I'll give it after the long, detailed one. Let's start by asking again:

Why estimate at all?

There's a whole #NoEstimates discussion about whether we need estimations or not. Unless your organization is mature enough to handle the truth, someone will want an estimation, believing he can do something with it: approve the task, delay it, budget for it, plan subsequent operations. That someone needs information to make decisions, and is basing them on the numbers we give. In reality, unless there are orders of magnitude between the expected results, the estimation wouldn't matter. If we had a deadline of 6 months, and the estimation is 8 months, the project will probably be approved anyway, knowing that we can remove scope from it. If we estimated a project will take a year, there's going to be a 3 month buffer between it and the next one, because "we know how it works". Things usually go forward regardless of our estimation. If, however, we estimate we need 5 times the budget we thought we needed, this may cause the project to be cancelled. In summary, the upfront estimation serves decision making. In fact, if you just go with the discussion, and leave the number out, you can reach the same decisions. So why do we need the numbers? Numbers are good proxies. They are simple, manageable, and we can draw wonderful graphs with them. The fact that they are wrong, or can be right only in a very small number of cases, is really irrelevant, because we like numbers. Still, someone high up asked for them; shouldn't we give them the best answer we can? Indeed we should. But we need to define what the "best answer" is, and how we get it.

How do we estimate?

How do we get to the "it will take 3 months" answer? We rely on past experience. We apply our experience, hopefully, or someone else's, to compare similar projects from our past to the one we're estimating. We may even have collected data, so our estimates are not based on our bad memory. Software changes all the time, so even past numbers should be modified. We don't know how to factor in the things we don't know how to do, or the "unknown unknowns" that will bite us, so we multiply by a factor until a consensus is reached. We forget stuff, we assume stuff, but in the end we get to the "3 months" answer we can live with. Sometimes. How about the part we do know about – we can estimate that one more precisely. We can break it down into design, technology and risk, and estimate it "better". We can. But there's a catch. Suppose that after we finished the project, we find that 30% of it was the "unknown unknowns" stuff. We could have estimated the other 70% very precisely, but the whole estimation would still be volatile: even a perfect estimate of the 70% leaves the total off by however much that 30% turns out to cost. (I'm being very conservative here; the "unknown unknowns" at the time of estimation are what makes up most of a project.)

The simple answer

So here is what we know:

Estimation is mostly wrong.
People still want estimates.
It takes time to estimate.
Precise estimation costs more.
Precise and rough estimations have the same statistical meaning, because of the unknowns.

That means that we need "good enough" estimates. These are the ones that cost less, and give a good enough, trusted basis for decisions to the people who ask for them.
Fredkin's Paradox talks about how the closer the options we need to decide between are, the longer it takes us to decide, while the difference in impact between choosing one or the other becomes negligible. Effective estimation recognizes the paradox, and tries to fight it: because the impact of variations in the estimates is negligible, there's no need to deliberate over them further. If you get to the same quality of answer, you should go for the cheaper option. Precise estimates are costly, and you won't get a benefit from making them more precise. In fact, as a product manager, I wouldn't ask for precise estimates, because they cost me money and time not being spent on actual delivery. Working software over comprehensive documentation, remember?

Reference: The Hidden Cost Of Estimation from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

The HVT Analysis Approach

In my career it took me some time to understand and become convinced of the importance of doing analysis. I still remember my first job experience: I just wanted to quickly write some code and refactor it n times to get better results. Didn't you? Things are different today, and I am writing this post to share with you my personal approach to analysis. It is not something new, it is just my tailored method based on experience and well known methodologies. In general you get as input a problem domain (the observed system with all the entities and rules involved) and must produce as output a solution domain (the designed system that solves the original analysis problem). So a good start is to:

Get a clear view of the problem to solve.
Reduce the problem domain to a minimum knowledge space.

If I had to quickly define Analysis, I would say that it is about finding details, categorizing knowledge and solving problems. The HVT approach is made of three steps:

Horizontal Analysis (Layers)
Vertical Analysis (Integration)
Transverse Analysis (Influences)

To briefly describe the method I will use examples; those will be based on a quite common case study, that of building a house. In this case study our goal is to identify all the requirements an architect needs to design the building.

Phase 1: The Horizontal Analysis

Horizontally we search for common aspects, grouping them into named Layers with minimal coupling between them (and ideally no cyclic dependencies). The scope is to detect all the functional requirements and have a good vision of their functional context and boundaries. In our example we could define the following layers:

Geography: The ground has to be solid; it must be far from floods; it must be close to green areas; it must be close to schools; …
Infrastructure: Has to be connected to electricity and water providers; has to be connected to the city sewers; must have a very fast internet connection; …
Technology: Most devices must be remotely controllable; doors have to open with a human recognition technology; it must produce solar electricity; …
Security & Ergonomics: A modern alarm system has to be installed; the interior design has to be safe for babies; it must be accessible for old people; …

Phase 2: The Vertical Analysis

Vertically we study Integration. Integration means putting things to work together; if things are intrinsically interoperable, then integration efforts will be minimized. And this is our scope. For this step we choose profiles, and we do analysis by formulating integration questions. In our example:

Adult: Will an adult have easy access to all the necessary devices? Does the furniture fit well for him? Is the kitchen comfortable? …
Baby: Are the stairs well protected? Will he have enough space to play? Is the chosen floor easy to clean? …
Apartment: Does the apartment easily access infrastructure services? Does it have a homogeneous design? Does it have enough light? …

A negative answer probably means a loop back to the previous phase, to make some adaptations that increase interoperability.

Phase 3: The Transverse Analysis

The last step is the most complicated, and not always needed. Its purpose is to study the indirect influences of different layers spanning different profiles. As in phase 2, we do it by analyzing the ongoing model and formulating questions. In our example some questions could be:

Will the Wi-Fi required by an adult be dangerous for a baby? Maybe it would be better to have less signal power in his sleeping room.
Will an adult be able to sleep when his child is playing an electric guitar? Maybe the child's room should be acoustically isolated.

This process is sometimes difficult to carry out, because it will surely bring you to find conflicts between profiles that sometimes do not want to (or simply cannot) renounce their needs.

Conclusion

Even if the case study was very simple, I hope you get an idea of what it means to move it to more complex domains. For example, in Software Architecture clients are stakeholders, needs are functional and non functional requirements, profiles are applications or components, and an indirect influence can be a compatibility matrix of the used technologies. It is not easy to explain this approach in a few lines; there are thousands of words I missed, so if you have questions or any advice, do not hesitate to leave a comment and share it.

Reference: The HVT Analysis Approach from our JCG partner Marco Di Stefano at the Refactoring Ideas blog....

Name of the class

In Java every class has a name. Classes are in packages, and this lets us programmers work together avoiding name collisions. I can name my class A and you can also name your class A; as long as they are in different packages, they work together fine. If you looked at the API of the class Class, you certainly noticed that there are three different methods that give you the name of a class:

getSimpleName() gives you the name of the class without the package.
getName() gives you the name of the class with the full package name in front.
getCanonicalName() gives you the canonical name of the class.

Simple, is it? Well, the first is simple and the second is also meaningful, unless there is that disturbing canonical name. It is not evident what that is. And if you do not know what a canonical name is, you may feel some disturbance in the force of your Java skills for the second one as well. What is the difference between the two? If you want a precise explanation, visit chapter 6.7 of the Java Language Specification. Here we go with something simpler, aimed to be easier to understand though not so thorough. Let's see some examples:

package pakage.subpackage.evensubberpackage;

import org.junit.Assert;
import org.junit.Test;

public class WhatIsMyName {
    @Test
    public void classHasName() {
        final Class<?> klass = WhatIsMyName.class;
        final String simpleNameExpected = "WhatIsMyName";
        Assert.assertEquals(simpleNameExpected, klass.getSimpleName());
        final String nameExpected = "pakage.subpackage.evensubberpackage.WhatIsMyName";
        Assert.assertEquals(nameExpected, klass.getName());
        Assert.assertEquals(nameExpected, klass.getCanonicalName());
    }
    ...

This "unit test" just runs fine. But as you can see, there is no difference between name and canonical name in this case. (Note that the name of the package is pakage and not package. To test your Java lexical skills, answer the question: why?) Let's have a look at the next example from the same junit test file:

@Test
public void arrayHasName() {
    final Class<?> klass = WhatIsMyName[].class;
    final String simpleNameExpected = "WhatIsMyName[]";
    Assert.assertEquals(simpleNameExpected, klass.getSimpleName());
    final String nameExpected = "[Lpakage.subpackage.evensubberpackage.WhatIsMyName;";
    Assert.assertEquals(nameExpected, klass.getName());
    final String canonicalNameExpected = "pakage.subpackage.evensubberpackage.WhatIsMyName[]";
    Assert.assertEquals(canonicalNameExpected, klass.getCanonicalName());
}

Now there are differences. When we talk about arrays, the simple name signals it by appending the opening and closing brackets, just like we would do in Java source code. The "normal" name looks a bit weird. It starts with [L and a semicolon is appended. This reflects the internal representation of class names in the JVM. The canonical name changed similarly to the simple name: it is the same as before for the class, having all the package names as prefix, with the brackets appended. It seems that getName() is more the JVM name of the class, and getCanonicalName() is more like the fully qualified name at the Java source level.
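As an aside, the bracket notation is the JVM's internal type-descriptor encoding: each [ marks one array dimension, L...; wraps an object type, and every primitive has a single-letter code. A few illustrative probes (the expected output is noted in the comments):

System.out.println(String[].class.getName());   // [Ljava.lang.String;
System.out.println(int[].class.getName());      // [I  (I is the code for int)
System.out.println(boolean[].class.getName());  // [Z
System.out.println(long[][].class.getName());   // [[J (two dimensions, J is long)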
Let's go on with yet another example (we are still in the same file):

class NestedClass{}

@Test
public void nestedClassHasName() {
    final Class<?> klass = NestedClass.class;
    final String simpleNameExpected = "NestedClass";
    Assert.assertEquals(simpleNameExpected, klass.getSimpleName());
    final String nameExpected = "pakage.subpackage.evensubberpackage.WhatIsMyName$NestedClass";
    Assert.assertEquals(nameExpected, klass.getName());
    final String canonicalNameExpected = "pakage.subpackage.evensubberpackage.WhatIsMyName.NestedClass";
    Assert.assertEquals(canonicalNameExpected, klass.getCanonicalName());
}

The difference is the dollar sign in the name of the class. Again, the "name" is more what is used by the JVM, and the canonical name is more Java-source-code-like. If you compile this code, the Java compiler will generate the files:

WhatIsMyName.class and WhatIsMyName$NestedClass.class

Even though the class is named NestedClass, it actually is an inner class. However, in the naming there is no difference: a static or non-static class inside another class is named the same way. Now let's see something even more interesting:

@Test
public void methodClassHasName() {
    class MethodClass{};
    final Class<?> klass = MethodClass.class;
    final String simpleNameExpected = "MethodClass";
    Assert.assertEquals(simpleNameExpected, klass.getSimpleName());
    final String nameExpected = "pakage.subpackage.evensubberpackage.WhatIsMyName$1MethodClass";
    Assert.assertEquals(nameExpected, klass.getName());
    final String canonicalNameExpected = null;
    Assert.assertEquals(canonicalNameExpected, klass.getCanonicalName());
}

This time we have a class inside a method. Not a usual scenario, but valid from the Java language point of view. The simple name of the class is just that: the simple name of the class. No big surprise. The "normal" name, however, is interesting. The Java compiler generates a JVM name for the class, and this name contains a number. Why? Because nothing would stop me from having a class with the same name in another method of our test class, and inserting a number is the way to prevent name collisions for the JVM. The JVM does not know or care anything about inner and nested classes or classes defined inside a method. A class is just a class. If you compile the code you will probably see the file WhatIsMyName$1MethodClass.class generated by javac. I had to add "probably" not because I count on the possibility of you being blind, but rather because this name is actually an internal matter of the Java compiler. It may choose a different name-collision-avoiding strategy, though I know of no compiler that differs from the above. The canonical name is the most interesting. It does not exist! It is null. Why? Because you can not access this class from outside the method defining it. It does not have a canonical name. Let's go on. What about anonymous classes? They should not have a name. After all, that is why they are called anonymous.

@Test
public void anonymousClassHasName() {
    final Class<?> klass = new Object(){}.getClass();
    final String simpleNameExpected = "";
    Assert.assertEquals(simpleNameExpected, klass.getSimpleName());
    final String nameExpected = "pakage.subpackage.evensubberpackage.WhatIsMyName$1";
    Assert.assertEquals(nameExpected, klass.getName());
    final String canonicalNameExpected = null;
    Assert.assertEquals(canonicalNameExpected, klass.getCanonicalName());
}

Actually, they do not have a simple name. The simple name is the empty string. They do, however, have a name, made up by the compiler.
Poor javac does not have any other choice. It has to make up some name even for the unnamed classes. It has to generate the code for the JVM, and it has to save it to some file. Are we done with the examples? No. We have something simple (a.k.a. primitive) at the end: Java primitives.

@Test
public void intClassHasName() {
    final Class<?> klass = int.class;
    final String intNameExpected = "int";
    Assert.assertEquals(intNameExpected, klass.getSimpleName());
    Assert.assertEquals(intNameExpected, klass.getName());
    Assert.assertEquals(intNameExpected, klass.getCanonicalName());
}

If the class represents a primitive, like int (what can be simpler than an int?), then the simple name, "the" name and the canonical name are all int, the name of the primitive. And an array of a primitive is very simple too, isn't it?

@Test
public void intArrayClassHasName() {
    final Class<?> klass = int[].class;
    final String simpleNameExpected = "int[]";
    Assert.assertEquals(simpleNameExpected, klass.getSimpleName());
    final String nameExpected = "[I";
    Assert.assertEquals(nameExpected, klass.getName());
    final String canonicalNameExpected = "int[]";
    Assert.assertEquals(canonicalNameExpected, klass.getCanonicalName());
}

Well, it is not that simple. The name is [I, which is a bit mysterious unless you read the respective chapter of the JVM specification (or the descriptor cheat sheet above). Perhaps I will talk about that another time.

Conclusion

The simple name of the class is simple. The "name" returned by getName() is the one interesting for JVM-level things. The getCanonicalName() is the one that looks most like Java source.

You can get the full source code of the example above from the gist e789d700d3c9abc6afa0 from GitHub.

Reference: Name of the class from our JCG partner Peter Verhas at the Java Deep blog....

Typical Mistakes in Java Code

This page contains the most typical mistakes I see in the Java code of people working with me. Static analysis (we're using qulice) can't catch all of these mistakes for obvious reasons, and that's why I decided to list them all here. Let me know if you want to see something else added, and I'll be happy to oblige. All of the listed mistakes are related to object-oriented programming in general and to Java in particular.

Class Names

Read this short "What is an Object?" article. Your class should be an abstraction of a real life entity with no "validators", "controllers", "managers", etc. If your class name ends with an "-er" — it's a bad design. And, of course, utility classes are anti-patterns, like StringUtils, FileUtils, and IOUtils from Apache. The above are perfect examples of terrible designs. Read this follow up post: OOP Alternative to Utility Classes. Of course, never add suffixes or prefixes to distinguish between interfaces and classes. For example, all of these names are terribly wrong: IRecord, IfaceEmployee, or RecordInterface. Usually, the interface name is the name of a real-life entity, while the class name should explain its implementation details. If there is nothing specific to say about an implementation, name it Default, Simple, or something similar. For example:

class SimpleUser implements User {};
class DefaultRecord implements Record {};
class Suffixed implements Name {};
class Validated implements Content {};

Method Names

Methods can either return something or return void. If a method returns something, then its name should explain what it returns, for example (don't ever use the get prefix):

boolean isValid(String name);
String content();
int ageOf(File file);

If it returns void, then its name should explain what it does. For example:

void save(File file);
void process(Work work);
void append(File file, String line);

There is only one exception to the rule just mentioned — test methods for JUnit. They are explained below.

Test Method Names

Method names in JUnit tests should be created as English sentences without spaces. It's easier to explain by example:

/**
 * HttpRequest can return its content in Unicode.
 * @throws Exception If test fails
 */
public void returnsItsContentInUnicode() throws Exception {
}

It's important to start the first sentence of your JavaDoc with the name of the class you're testing, followed by can. So, your first sentence should always be similar to "somebody can do something". The method name will state exactly the same, but without the subject. If I add a subject at the beginning of the method name, I should get a complete English sentence, as in the above example: "HttpRequest returns its content in unicode". Pay attention that the test method doesn't start with can. Only JavaDoc comments start with 'can'. Additionally, method names shouldn't start with a verb. It's a good practice to always declare test methods as throwing Exception.

Variable Names

Avoid composite names of variables, like timeOfDay, firstItem, or httpRequest. I mean this for both class variables and in-method ones. A variable name should be long enough to avoid ambiguity in its scope of visibility, but not too long if possible. A name should be a noun in singular or plural form, or an appropriate abbreviation.
For example:

List<String> names;
void sendThroughProxy(File file, Protocol proto);
private File content;
public HttpRequest request;

Sometimes, you may have collisions between constructor parameters and in-class properties, if the constructor saves incoming data in an instantiated object. In this case, I recommend creating abbreviations by removing vowels (see how USPS abbreviates street names). Another example:

public class Message {
    private String recipient;
    public Message(String rcpt) {
        this.recipient = rcpt;
    }
}

In many cases, the best hint for the name of a variable can be ascertained by reading its class name. Just write it with a small letter, and you should be good:

File file;
User user;
Branch branch;

However, never do the same for primitive types, like Integer number or String string. You can also use an adjective, when there are multiple variables with different characteristics. For instance:

String contact(String left, String right);

Constructors

Without exceptions, there should be only one constructor that stores data in object variables. All other constructors should call this one with different arguments. For example:

public class Server {
    private String address;
    public Server(String uri) {
        this.address = uri;
    }
    public Server(URI uri) {
        this(uri.toString());
    }
}

One-time Variables

Avoid one-time variables at all costs. By "one-time" I mean variables that are used only once. Like in this example:

String name = "data.txt";
return new File(name);

The above variable is used only once, and the code should be refactored to:

return new File("data.txt");

Sometimes, in very rare cases — mostly because of better formatting — one-time variables may be used. Nevertheless, try to avoid such situations at all costs.

Exceptions

Needless to say, you should never swallow exceptions, but rather let them bubble up as high as possible. Private methods should always let checked exceptions go out. Never use exceptions for flow control. For example, this code is wrong:

int size;
try {
    size = this.fileSize();
} catch (IOException ex) {
    size = 0;
}

Seriously, what if that IOException says "disk is full"? Will you still assume that the size of the file is zero and move on?
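The bubbling-up alternative is simply to declare the exception and let the caller decide what a failure means. A minimal sketch of that rewrite:

private int size() throws IOException {
    // no catch block: "disk is full" reaches the caller as an IOException
    return this.fileSize();
}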
Indentation

For indentation, the main rule is that a bracket should either end a line or be closed on the same line (the reverse rule applies to a closing bracket). For example, the following is not correct, because the first bracket is not closed on the same line and there are symbols after it. The second bracket is also in trouble, because there are symbols in front of it and it is not opened on the same line:

final File file = new File(directory,
  "file.txt");

Correct indentation should look like:

StringUtils.join(
    Arrays.asList(
        "first line",
        "second line",
        StringUtils.join(
            Arrays.asList("a", "b")
        )
    ),
    "separator"
);

The second important rule of indentation says that you should put as much as possible on one line – within the limit of 80 characters. The example above is not valid, since it can be compacted:

StringUtils.join(
    Arrays.asList(
        "first line", "second line",
        StringUtils.join(Arrays.asList("a", "b"))
    ),
    "separator"
);

Redundant Constants

Class constants should be used when you want to share information between class methods, and this information is a characteristic (!) of your class. Don't use constants as a replacement for string or numeric literals — a very bad practice that leads to code pollution. Constants (as with any object in OOP) should have a meaning in the real world. What meaning do these constants have in the real world:

class Document {
    private static final String D_LETTER = "D"; // bad practice
    private static final String EXTENSION = ".doc"; // good practice
}

Another typical mistake is to use constants in unit tests to avoid duplicate string/numeric literals in test methods. Don't do this! Every test method should work with its own set of input values. Use new texts and numbers in every new test method. They are independent. So, why do they have to share the same input constants?

Test Data Coupling

This is an example of data coupling in a test method:

User user = new User("Jeff");
// maybe some other code here
MatcherAssert.assertThat(user.name(), Matchers.equalTo("Jeff"));

On the last line, we couple "Jeff" with the same string literal from the first line. If, a few months later, someone wants to change the value on the third line, he/she has to spend extra time finding where else "Jeff" is used in the same method. To avoid this data coupling, you should introduce a variable.

Related Posts

You may also find these posts interesting:

Why NULL is Bad?
Objects Should Be Immutable
OOP Alternative to Utility Classes
Avoid String Concatenation
Simple Java SSH Client

Reference: Typical Mistakes in Java Code from our JCG partner Yegor Bugayenko at the About Programming blog....

Cross-dysfunctional teams

Every agile enthusiast will tell you how powerful a self-empowered, cross-functional team can be. Once you have one, it brings complete team accountability from product idea to customer support, it naturally grows with continuous improvement, and it finds self motivation in innovation and delivery of customer value. It's a beautiful and powerful concept; the practical implementation sometimes is not so beautiful, and more often than not what you get is a cross-dysfunctional team. Let's have a look at the cross-dysfunctional examples I have experienced.

Pseudo-specialist cross-dysfunctional team

Developer: "I am a developer, I am not meant to test, the testers test!"
Tester: "I don't need to know anything about how the product is designed, I only care about how the customers use it!"
Business Analyst: "I am not technical, I can't help you guys!"

"As Long As It Works for us" cross-dysfunctional team

Developer: "It works in our environments; it's operations' responsibility to make it work in production."
Tester: "Listen, it worked in UAT, it must be a configuration issue, or a missing firewall hole, and nothing I could have spotted during testing…"
Customer: "Hello! Nothing works here…"

Abdicating cross-dysfunctional team

Developer: "The architect told me to do it like this."
Tester: "Feck it, let the Test manager deal with it."
Business Analyst: "I don't think there is any value in this story, but the Product Owner wants it, so get on with it and develop it!"

Continuous Decline cross-dysfunctional team

Developer: "No point in doing retrospectives, things are always the same."
Tester: "We DON'T HAVE TIME to try new things!"
Business Analyst: "We do it like this because that's how we do things here, and that's it!"

Disintegrated cross-dysfunctional team

Developer: "My code works perfectly, it's their system that doesn't work, who cares."
Tester: "We have 100% coverage, all our tests pass and we have no bugs; if the developers of system X are idiots, there is nothing we can do about it."
Customer: "And still, nothing works here…"

Nazi cross-dysfunctional team

Developer: "Testers are failed programmers, they shouldn't be called engineers."
Tester: "Developers are only able to produce bugs, the world would be better with more testers and fewer developers."
Business Analyst: "I don't know why I bother talking to testers and developers, they are total idiots."

Do you recognise your team in one of the categories above? What have you done until now to help your team change? Little? Nothing? But you are still bitching about it, aren't you? Remember, you are very powerful and can become the change that you want to see.

Reference: Cross-dysfunctional teams from our JCG partner Augusto Evangelisti at the mysoftwarequality blog....

Monitoring Akka with Kamon

I like the JVM a lot, because there are a lot of tools available for inspecting a running JVM instance at runtime. Java Mission Control (jmc) is one of my favorite tools when it comes to monitoring threads, hot methods and memory allocation. However, these tools are of limited use when monitoring an event-driven, message-based system like Akka. A thread is almost meaningless, as it could have processed any kind of message. Luckily there are some tools out there to fill this gap. Even though the Akka docs are really extensive and useful, there isn't a lot about monitoring. I'm more a Dev than an Ops guy, so I will only give a brief and "I think it does this" introduction to the monitoring-storage-gathering-displaying stuff.

The Big Picture

First of all, when we are done we will have this infrastructure running. Thanks to docker we don't have to configure anything on the right hand side to get started.

Kamon

Starting on the left of the picture: Kamon is a library which uses AspectJ to hook into method calls made by the ActorSystem and record events of different types. The Kamon docs have some big gaps, but you can get a feeling for what is possible. I will not make any special configuration, and will just use the defaults to get started as fast as possible.

StatsD – Graphite

StatsD is a network daemon that runs on the Node.js platform and listens for statistics, like counters and timers, sent over UDP, and sends aggregates to one or more pluggable backend services. Kamon also provides other backends (datadog, newrelic) to report to. For this tutorial we stick with the free StatsD server and Graphite as the backend service.

Grafana

Grafana is a frontend for displaying your stats logged to Graphite. There is a nice demo you can play around with. However, I will give detailed instructions on how to add our metrics to our Grafana dashboard.

Getting started

First we need an application we can monitor. I'm using my akka-kamon-activator. Check out the code:

git clone git@github.com:muuki88/activator-akka-kamon.git

The application contains two message generators: one for peaks and one for constant load. Two types of actors handle these messages. One creates random numbers, and its child actors calculate the prime factors.
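The activator project itself is written in Scala, but to make the setup concrete, a worker like the prime-factor actor could look roughly like this with Akka 2.3's Java API (the class and message shapes here are invented for illustration, not taken from the project):

import java.util.ArrayList;
import java.util.List;

import akka.actor.UntypedActor;

public class Worker extends UntypedActor {

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof Long) {
            // reply to the generator with the prime factors of the number
            getSender().tell(primeFactors((Long) message), getSelf());
        } else {
            unhandled(message);
        }
    }

    private static List<Long> primeFactors(long n) {
        final List<Long> factors = new ArrayList<>();
        for (long f = 2; f * f <= n; f++) {
            while (n % f == 0) {
                factors.add(f);
                n /= f;
            }
        }
        if (n > 1) {
            factors.add(n); // the remainder is itself prime
        }
        return factors;
    }
}

For the Kamon actor filter below to pick such workers up, they need to be created under matching names, e.g. system.actorOf(Props.create(Worker.class), "worker-1").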
Kamon Dependencies and sbt-aspectj

First we add the kamon dependencies via:

val kamonVersion = "0.3.4"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % "2.3.5",
  "io.kamon" %% "kamon-core" % kamonVersion,
  "io.kamon" %% "kamon-statsd" % kamonVersion,
  "io.kamon" %% "kamon-log-reporter" % kamonVersion,
  "io.kamon" %% "kamon-system-metrics" % kamonVersion,
  "org.aspectj" % "aspectjweaver" % "1.8.1"
)

Next we configure the sbt-aspectj-plugin to weave our code at compile time. First add the plugin to your plugins.sbt:

addSbtPlugin("com.typesafe.sbt" % "sbt-aspectj" % "0.9.4")

And now we configure it:

aspectjSettings

javaOptions <++= AspectjKeys.weaverOptions in Aspectj

// when you call "sbt run" aspectj weaving kicks in
fork in run := true

The last step is to configure what should be recorded. Open up your application.conf, where your akka configuration resides. Kamon uses the kamon configuration key.

kamon {

  # What should be recorded
  metrics {
    filters = [
      {
        # actors that should be monitored
        actor {
          includes = [ "user/*", "user/worker-*" ] # a list of what should be included
          excludes = [ "system/*" ]                # a list of what should be excluded
        }
      },

      # not sure about this yet. Looks important
      {
        trace {
          includes = [ "*" ]
          excludes = []
        }
      }
    ]
  }

  # ~~~~~~ StatsD configuration ~~~~~~~~~~~~~~~~~~~~~~~~

  statsd {
    # Hostname and port in which your StatsD is running. Remember that StatsD packets are sent using UDP and
    # setting unreachable hosts and/or not open ports wont be warned by the Kamon, your data wont go anywhere.
    hostname = "127.0.0.1"
    port = 8125

    # Interval between metrics data flushes to StatsD. It's value must be equal or greater than the
    # kamon.metrics.tick-interval setting.
    flush-interval = 1 second

    # Max packet size for UDP metrics data sent to StatsD.
    max-packet-size = 1024 bytes

    # Subscription patterns used to select which metrics will be pushed to StatsD. Note that first, metrics
    # collection for your desired entities must be activated under the kamon.metrics.filters settings.
    includes {
      actor = [ "*" ]
      trace = [ "*" ]
      dispatcher = [ "*" ]
    }

    simple-metric-key-generator {
      # Application prefix for all metrics pushed to StatsD. The default namespacing scheme for metrics follows
      # this pattern:
      # application.host.entity.entity-name.metric-name
      application = "yourapp"
    }
  }
}

Our app is ready to run. But first, we deploy our monitoring backend.

Monitoring Backend

As we saw in the first picture, we need a lot of stuff running in order to store our log events. The libraries and components used are most likely reasonable, and you (or the more Ops than Dev guy) will have to configure them. But for the moment we just fire them all up at once in a simple docker container. I don't run it in detached mode, so I can see what's going on:

docker run -v /etc/localtime:/etc/localtime:ro -p 80:80 -p 8125:8125/udp -p 8126:8126 -p 8083:8083 -p 8086:8086 -p 8084:8084 --name kamon-grafana-dashboard muuki88/grafana_graphite:latest

My image is based on a fork of the original docker image by kamon.

Run and build the Dashboard

Now go to your running Grafana instance at localhost. You will see a default dashboard, which we will use to display the average time-in-mailbox. Click on the title of the graph ("First Graph (click title to edit)"). Now select the metrics like this:

And that's it!

Reference: Monitoring Akka with Kamon from our JCG partner Nepomuk Seiler at the mukis.de blog....