Complexity is the Excuse

When I speak to people about how it is possible to continuously deliver customer value with near zero issues, I usually get laughed at. After I tell them that there is nothing to laugh at, people start challenging me on how I integrate with other systems, how I manage defects, how I deal with changing requirements, how I manage regression issues, and how I cope with context switching and other similar issues. Luckily enough, I have empirical data, and my answers are based on experience and not on some book or model I read about somewhere, so after a while I manage to move people from laughing at me to at least starting to be curious. It has to be said, people become curious, but deep inside they still don't believe me and still think I am a big fool. How could I blame them? I would have done the same a few years back. I know that people need to prove it to themselves to be able to believe it; I have no problem with that. While I manage to explain my way of dealing with defects, changing requirements, regression, and context switching, until now I haven't been able to answer the biggest question of them all, the one that every conversation eventually ends up at: how do you deal with extremely complex systems that need to scale up? I have been thinking about this for a while now, and the more I think about it, the more I become convinced that Complexity is the Excuse. Complexity exists when we are not able to prioritise the value to deliver (we don't know what we really want). Complexity exists when we are not able to understand and describe the system we are building. And finally, complexity is a nice excuse for not doing our job properly as software engineers – something convenient to blame.

Reduce complexity and stop accepting excuses:

1. You want to deliver customer value? OK. You don't need to deliver everything on day 1. Sit down with your business partners, identify the highest-value feature, and focus on that one. If asked for bells and whistles, say, "we will do it later, and only if we really need to"; chances are that by the time you finish the first feature you will have learned that you need something different anyway.
2. When deciding how to implement such a feature, look for the simplest and cheapest solution; do NOT future-proof. By future-proofing you will be adding complexity, and guess what? YAGNI. Measure the success of your feature and feel free to change direction; failure is learning.
3. Once you have identified the value to be delivered, make sure you break down its own complexity. If a user story (or unit of work, or whatever you call it) has more than one happy path, it is too complex: break it down into two or more units of work. If you start working on something and discover it is more complex than you had thought, stop and break it down into less complex units; if you keep going and say nothing, you will hide complexity, and sooner or later you are bound to mess it up.
4. Scaling up is the wrong answer to the false complexity question. Chances are you don't need to scale up at all: read 1 and 2 again and again, and you will find out that you don't need as many resources as you thought you would. Scaling up is, most of the time, the easiest, most expensive and laziest approach to fighting complexity.

For doing 1 and 2 in a structured manner, I strongly recommend an approach called Impact Mapping, devised by Gojko Adzic; it works. For doing 3, click here. For doing 4, use your head.
TL;DR: stop blaming complexity when you don't understand what you are building.

Reference: Complexity is the Excuse from our JCG partner Augusto Evangelisti at the mysoftwarequality blog.

Writing Tests for Data Access Code – Green Build Is Not Good Enough

The first thing that we have to do before we can start writing integration tests for our data access code is to decide how we will configure our test cases. We have two options: the right one and the wrong one. Unfortunately, many developers make the wrong choice. How can we avoid making the same mistake? We can make the right decisions by following these three rules:

Rule 1: We Must Test Our Application

This rule seems obvious. Sadly, many developers use a different configuration in their integration tests because it makes their tests pass. This is a mistake! We should ask ourselves this question: do we want to test that our data access code works when we use the configuration that is used in the production environment, or do we just want our tests to pass? I think the answer is obvious. If we use a different configuration in our integration tests, we are not testing how our data access code behaves in the production environment. We are testing how it behaves when we run our integration tests. In other words, we cannot verify that our data access code works as expected when we deploy our application to the production environment. Does this sound like a worthy goal? If we want to test that our data access code works when we use the production configuration, we should follow these simple rules:

- We should configure our tests by using the same configuration class or configuration file which configures the persistence layer of our application.
- Our tests should use the same transactional behavior as our application.

These rules have two major benefits:

- Because our integration tests use exactly the same configuration as our application and share the same transactional behavior, our tests help us verify that our data access code works as expected when we deploy our application to the production environment.
- We don't have to maintain different configurations. In other words, if we make a change to our production configuration, we can test that the change doesn't break anything without making any changes to the configuration of our integration tests.

Rule 2: We Can Break Rule One

There are no universal truths in software development. Every principle is valid only under certain conditions, and if the conditions change, we have to re-evaluate it. This applies to the first rule as well. It is a good starting point, but sometimes we have to break it. If we want to introduce a test-specific change to our configuration, we have to follow these steps:

1. Figure out the reason for the change.
2. List the benefits and drawbacks of the change.
3. If the benefits outweigh the drawbacks, we are allowed to change the configuration of our tests.
4. Document the reason why the change was made. This is crucial because it gives us the possibility to revert the change if we find out that making it was a bad idea.

For example, we might want to run our integration tests against an in-memory database when these tests are run in a development environment (i.e. on the developer's personal computer), because this shortens the feedback loop. The only drawback of this change is that we cannot be 100% sure that our code works in the production environment, which uses a real database. Nevertheless, the benefits of this change outweigh its drawbacks, because we can (and should) still run our integration tests against a real database. A good way to do this is to configure our CI server to run these tests.
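As a concrete illustration of rules one and two, here is a minimal sketch assuming a Spring-based application; the PersistenceContext and DataAccessIT classes and the DATABASE_URL variable are hypothetical, not taken from the original post. The integration test imports the same configuration class that production code uses, and a profile switch is the single, documented, test-specific difference:

import static org.junit.Assert.assertNotNull;

import javax.sql.DataSource;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// The application's persistence configuration, used by production code and tests alike.
@Configuration
class PersistenceContext {

    // Developer machine: an in-memory H2 database for a short feedback loop.
    @Bean
    @Profile("dev")
    DataSource inMemoryDataSource() {
        return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2).build();
    }

    // CI server and production: a real database.
    // Driver class, username and password are omitted for brevity.
    @Bean
    @Profile("production")
    DataSource realDataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setUrl(System.getenv("DATABASE_URL"));
        return dataSource;
    }
}

// The integration test imports the production configuration class. The only
// test-specific difference is which profile is active when the test runs.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = PersistenceContext.class)
public class DataAccessIT {

    @Autowired
    private DataSource dataSource;

    @Test
    public void wiresTheSamePersistenceConfigurationAsProduction() {
        assertNotNull(dataSource);
    }
}

Locally we would run the tests with -Dspring.profiles.active=dev, while the CI server would activate the profile that talks to a real database, so the configuration under test never drifts away from the one that is deployed.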
This is of course a very simple (and maybe a bit naive) example, and often the situations we face are much more complicated. That is why we should follow this guideline: if in doubt, leave test config out.

Rule 3: We Must Not Write Transactional Integration Tests

One of the most dangerous mistakes we can make is to modify the transactional behavior of our application in our integration tests. If we make our tests transactional, we ignore the transaction boundary of our application and ensure that the tested code is executed inside a transaction. This is extremely harmful, because it only helps us hide possible errors instead of revealing them. If you want to know how transactional tests can ruin the reliability of your test suite, you should read the blog post Spring pitfalls: transactional tests considered harmful by Tomasz Nurkiewicz. It provides many useful examples of the errors which are hidden if you write transactional integration tests. Once again we have to ask ourselves this question: do we want to test that our data access code works when we use the configuration that is used in the production environment, or do we just want our tests to pass? And once again, the answer is obvious.

Summary

This blog post has taught us three things:

- Our goal is not to verify that our data access code is working correctly when we run our tests. Our goal is to ensure that it is working correctly when our application is deployed to the production environment.
- Every test-specific change creates a difference between our test configuration and our production configuration. If this difference is too big, our tests are useless.
- Transactional integration tests are harmful because they ignore the transactional behavior of our application and hide errors instead of revealing them.

That is a pretty nice summary. We did indeed learn those things, but we learned something much more important as well. The most important thing we learned from this blog post is this question: do we want to test that our data access code works when we use the configuration that is used in the production environment, or do we just want our tests to pass? If we keep asking this question, the rest should be obvious to us.

Reference: Writing Tests for Data Access Code – Green Build Is Not Good Enough from our JCG partner Petri Kainulainen at the Petri Kainulainen blog.

The 4 Levels of Freedom For Software Developers

For quite some time now I've been putting together, in my mind, what I think are the four distinct levels that software developers can go through in trying to gain their "freedom." For most of my software development career, when I worked for a company as an employee, I had the dream of someday being free. I wanted to be able to work for myself. To me, that was the ultimate freedom. But, being naive as I was, I didn't realize that there were actually different levels of "working for yourself." I just assumed that if you were self-employed, you were self-employed. It turns out most software developers I have talked to about this topic have the same views I did – before I knew better. I've written in the past about how to quit your job, but this post is a bit different. This post is not really about how to quit your job, but about the different levels of self-employment you can attain after you do so.

The four levels

The four levels I am about to describe are based on the level of freedom you experience in your work; they have nothing to do with skill level. But, generally, we progress up these levels as we seek to, and hopefully succeed in, gaining more freedom. So, most software developers start at level one, and the first time they become self-employed, it is usually at level three – although it is possible to skip straight to three. Here is a quick definition of the levels (I'll cover each of them in detail next):

1. Employed – you work for someone else
2. Freelancer – you are your own boss, but you work for many someone elses
3. Product creator – you are your own boss, but your customers determine what you work on
4. Financially free – you work on what you want, when you want; you don't need to make money

I started my career at level one and bounced back and forth between level two and level one for quite a while before I finally broke through to level three. I'm currently working on reaching level four – although I've found that it is easy to stay at level three even though you could move to four. Along the way, I've found that at each level I was at, I assumed I would feel completely free when I reached the level above. But each time I turned out to be wrong. While each level afforded me more general freedom, each level also seemed to not be what I imagined it would be.

Level one: employed

Like I said, most software developers start out at this level. To be honest, most software developers stay at this level – and don't get me wrong, there is nothing wrong with staying there – as long as you are happy. At this level, you don't have much freedom at all, because you basically have to work on what you are told to work on, you have to work when you are told to, and you are typically tied to a physical location. (Throughout this post, you'll see references to these three degrees of freedom.) Working for someone else isn't all that bad. You can have a really good job that pays really well, but in most cases you are trading a certain amount of freedom for some amount of security. You are getting a stable paycheck at a regular interval, but at the cost of a large portion of your freedom. Now, that doesn't mean that you can't have various levels of traditional employment. I think there are mini-levels of freedom that exist even when you are employed by someone else. For example, you are likely to be afforded more freedom about when you start and leave work as you move up and become more senior at a job.
You are also likely to be given a bit more autonomy over what you do – although Agile methodologies may have moved us back in that regard. You might even find freedom from location if you are able to find a job that allows you to telecommute. In my quest for more freedom, I actually traded a considerable amount of pay to accept a job where I would have the freedom of working from home. I erroneously imagined that working from home would be the ultimate freedom and that I would be content working for someone else for the rest of my career, so long as I could do it from home. (Don't get me wrong, working from home has its perks, but it has its disadvantages as well. When I worked from home, I felt more obligated to get more work done to prove that I wasn't just goofing off. I also felt that my work was never done.) Now, like I said before, most people will stay at level one and perhaps move around, gaining more freedom through things like autonomy and a flexible working schedule, but there are definite caps on freedom at this level. No one is going to pay you to do what you want and tell you that you can disappear whenever you want to. You are also going to have your income capped. You can only make so much money working for someone else, and that amount is mostly fixed ahead of time.

Level two: freelancer

So, this is the only other level that I had really imagined existed for a software developer, for most of my career. I remember thinking how wonderful it would be to work on my own projects with my own clients. I imagined that as a freelancer I could bid on government contracts and spend a couple of years doing a contract before moving on to the next. I also imagined an alternative where I worked for many different clients, working on different jobs at different times – all from the comfort of my PJs. When most software developers talk about quitting their jobs and becoming self-employed, I think this is what they imagine. They think, like I did, that this is the ultimate level of freedom. It didn't take me very long as a freelancer to realize that there really wasn't much more freedom in freelancing than there was in working for someone else. First of all, if you have just one big client, like most starting freelancers do, you are basically in a similar situation to being employed – the big difference is that now you can't bill for those hours you were goofing off. You will likely have more freedom about your working hours and days, but you'll be confined to the project your client has hired you to work on, and you might even have to come into their office to do the work. This doesn't mean that you don't have more freedom, though; it is just a different kind. If you have multiple clients, you have more control over your life and what you work on. You can set your own rate, you can set your own hours, and you can potentially turn down work that you don't want to do – although, in reality, you probably won't be turning down anything, especially if you are just starting out. Don't get me wrong, it is nice to have your own company and to be able to bill your clients, instead of being compelled to work for one boss who has ultimate control over your life, but freelancing is a lot of work, and on a daily basis it may be difficult to actually feel more free than you would working for someone else. Given the choice of just doing freelancing work or working for someone else, I'd rather just take the steady paycheck.
I wouldn't have said this five years ago, but I know now that freelancing is difficult and stressful. I really wouldn't go down this road unless you know this is what you want to do, or you are using it as a stepping stone to get somewhere else. From a pay perspective, a freelancer can make a lot more money than most employees. I currently do freelance work and I don't accept any work for less than $300 an hour. Now, I didn't start at that rate – when I first started out, $100 an hour was an incredible rate – but I eventually worked my way up to it. (If you want to find out how, check out my How to Market Yourself as a Software Developer package.) The big thing, though, is that your pay is not capped. The more you charge and the more hours you work, the more you make. You are only limited by the limits of those two things combined.

Level three: product creator

This level is where things get interesting. When I was mostly doing freelancing, I realized that my key mistake was not in working for someone else, but in trading dollars for hours. I realized that as a freelancer my life was not as beautiful as I had imagined it. I was not really free, because if I wasn't working, I wasn't getting paid. I actually ended up going back to full-time employment in order to rethink my strategy. The more I thought about it, the more I realized that in order to really gain the kind of freedom I wanted, I would need to create some kind of product that I could sell, or some kind of service that would generate income all the time without me having to work all the time. There are many ways to reach this level, but perhaps the most common is to build some kind of software or software as a service (SaaS) that generates income for you. You can then make money from selling that product, and you get to work on that product when and how you see fit. You can also reach this level by selling digital products of some sort. I was able to reach this level through a combination of this blog, mobile apps I built, royalty-generating courses I created for Pluralsight, and my own How to Market Yourself as a Software Developer package. (Yes, I have plugged it twice now, but hey, this is my blog – and this is how I make money.) You have quite a bit of freedom at this level. You no longer have any real boss. There is no pointy-haired boss telling you what to work on, and you don't have clients telling you what projects to work on either. You most likely can work from anywhere you want and whenever you want. You can even disappear for months at a time – so long as you figure out a way to handle support. Now, that doesn't mean that everything is peaches and roses at this level either. For one thing, I imagined that if I was creating products, I would get to work on exactly what I wanted to work on. This is far from the truth. I have a large degree of control over what I choose to work on and create, but because I am bound by the need to make money, I have to give a large portion of that control over to the market. I have to build what my customers will pay for. This might not seem like a big deal, but it is. I've always had the dream of writing code and working on my own projects. I dreamed that being a product creator and making money from my own products would give me that freedom. To some degree it does, but I also have to pay careful attention to what my audience and customers want, and I have to put my primary focus on building those things. This level is also quite stressful, because everything depends on you.
You have to be successful to get paid. When you are an employee, all you have to do is show up. When you are a freelancer, you just have to get clients and do the work – you get paid for the work you put in, not the results. When you are a product creator, you might spend three months working on something and not make a dime. No one cares how much work you did; only results count. As far as income potential, there is no cap here. You might struggle to make just enough to live on, but if you are successful, there is no limit to how much you could earn, since you are not bound by time. At this level you are no longer trading hours for dollars. To me, it isn't worth striving for level two; it is better to just work for someone else until you can reach level three, because this level of freedom is one that actually makes a big difference in your life. You still may not be able to work on just what you want to work on, but at least at this point – once you are successful – all the other areas of your life start to become much more free.

Level four: financially free

I couldn't come up with a good name for this level, but this is the level where you no longer have to worry about making money. One thing I noticed when I finally reached level three was that a large portion of what was holding me back from doing exactly what I wanted to do was the need to generate income. Now, it's true that you can work on what you want to work on and make money doing it, but often the need to generate income tends to influence what you work on and how you work on it. For example, I'd really like to create a video game. I've always dreamed of doing a large game development project. But I know it isn't likely to be profitable. As long as I am worrying about income, my freedom is going to be limited to some degree. If I don't have passive income coming in that is more than enough to sustain me, I can't just quit doing the projects that do make me money and start writing code for a video game – well, I could, but it wouldn't be smart, and I'd feel pretty guilty about it. So, in my opinion, the highest form of freedom a software developer can achieve is being financially free. What do I mean by financially free, though? It basically means that you don't have to worry about cash. Perhaps you sold your startup for several million dollars, or you have passive income coming in from real estate or other investments that more than provides for your daily living needs. (For some good information on how to do this or how this might work, I recommend starting with the book "Rich Dad Poor Dad".) At this level of freedom, you can basically do what you want. You can create software that interests you, because it interests you – you aren't worried about profitability. Want to create an Android app, just because? Go ahead. Want to learn a new programming language because you think it would be fun? Go for it. This has always been the level of freedom I have secretly wanted. I never wanted to sit back and do nothing, but I've always wanted to work on what interested me and only what interested me. Every level that I thought would have this freedom, I realized didn't. I realized that there was always something else controlling what I worked on, be it my boss, my clients or my customers. Now, this doesn't mean that you can't still make money from your projects. In fact, paradoxically, I believe that if you can get to this stage, you have the potential to make the most money.
Once you start working on what you want to work on, you are more likely to put much more passionate work into it, and it is very likely that it will be of high value. This is where programming becomes more like art. I don't have any proof of this, of course, but I suspect that when you don't care about making money, because you are just doing what you love, that is when you make the most of it. Don't get me wrong, you might be able to focus on doing what you love even if you aren't making any money. I know plenty of starving artists do – or at least they tell themselves they do – but I can't do it. I've tried it, but I always feel guilty and stressed about the fact that what I am working on isn't profitable. In my opinion, you really have to be financially free to experience true creative freedom. I'm actually working on getting to this level. Technically, I could say I am there now, but I am still influenced greatly by profitability. Although, now, I am not choosing my projects solely on the criteria of what will make the most money. I am turning down more and more projects and opportunities that don't align with what I want to do, as I am trying to transition to working on only what interests me as my passive income increases. What can you gather from all this? Well, the biggest thing is that freedom has different levels and that, perhaps, you don't want to be a freelancer after all. I think many software developers assume working for themselves by freelancing will give them the ultimate freedom. They don't realize that they'll only be able to work on exactly what they want to work on when they are actually financially free. So, my advice to you is this: if you want to have full creative control over your life and what you work on, work on becoming financially free. If you want a high degree of autonomy in most areas of your life, you should try to develop and sell products. If you are happy just being your own boss, even if you have to essentially take orders from clients, freelancing might be the road for you. And, if all of this just seems like too high a price to pay, you might want to just stay where you are and keep collecting your weekly paychecks – nothing wrong with that.

Reference: The 4 Levels of Freedom For Software Developers from our JCG partner John Sonmez at the Making the Complex Simple blog.

Creating Your Own Java Annotations

If you've been programming in Java and using any of the popular frameworks like Spring and Hibernate, you should be very familiar with using annotations. When working with an existing framework, its annotations typically suffice. But have you ever found a need to create your own annotations? Not too long ago, I found a reason to create my own annotations for a project that involved verifying common data stored in multiple databases.

The Scenario

The business had multiple databases that were storing the same information and had various means of keeping the data up to date. The business had planned a project to consolidate the data into a master database to alleviate some of the issues involved with having multiple sources of data. Before the project could begin, however, the business needed to know how far out of sync the data was and make any necessary corrections to get it back in sync. The first step required creating a report that showed common data that belonged in multiple databases and validated the values, highlighting any records that didn't match according to the reconciliation rules defined. Here's a short summary of the requirements at the time:

- Compare the data between multiple databases for a common piece of data, such as a customer, company, or catalog information.
- By default, the value found should match exactly across all of the databases, based upon the type of value.
- For certain fields we only want to display the value found and not perform any data comparison.
- For certain fields we only want to compare the value found and perform data verification on the specific data sources specified.
- For certain fields we may want to do some complicated data comparisons that may be based on the value of other fields within the record.
- For certain fields we may want to format the data in a specific format, such as $000,000.00 for monetary amounts.
- The report should be in MS Excel format, each row containing the field value from each source. Any row that doesn't match according to the data verification rules should be highlighted in yellow.

Annotations

After going over the requirements and knocking around a few ideas, I decided to use annotations to drive the configuration for the data comparison and reporting process. We needed something that was somewhat simple, yet highly flexible and extensible. These annotations will be at the field level, and I like the fact that the configuration won't be hidden away in a file somewhere on the classpath. Instead, you'll be able to look at the annotation associated with a field to know exactly how it will be processed. In the simplest terms, an annotation is nothing more than a marker, metadata that provides information but has no direct effect on the operation of the code itself. If you've been doing Java programming for a while now, you should be pretty familiar with their use, but maybe you've never had a need to create your own. To do that you'll need to create a new type that uses the Java type @interface and contains the elements that specify the details of the metadata. Here's an example from the project:

@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
public @interface ReconField {

    /**
     * Value indicates whether or not the values from the specified sources should be
     * compared or will be used to display values or reference within a rule.
     *
     * @return The value if sources should be compared, defaults to true.
     */
    boolean compareSources() default true;

    /**
     * Value indicates the format that should be used to display the value in the report.
     *
     * @return The format specified, defaulting to native.
     */
    ReconDisplayFormat displayFormat() default ReconDisplayFormat.NATIVE;

    /**
     * Value indicates the ID value of the field used for matching source values up to the field.
     *
     * @return The ID of the field.
     */
    String id();

    /**
     * Value indicates the label that should be displayed in the report for the field.
     *
     * @return The label value specified, defaults to an empty string.
     */
    String label() default "";

    /**
     * Value that indicates the sources that should be compared for differences.
     *
     * @return The list of sources for comparison.
     */
    ReconSource[] sourcesToCompare() default {};
}

This is the main annotation that will drive how the data comparison process works. It contains the basic elements required to fulfill most of the requirements for comparing the data among the different data sources. @ReconField should handle most of what we need, except for the requirement of more complex data comparisons, which we'll go over a little bit later. Most of these elements are explained by the comments associated with each one in the code listing; however, there are a couple of key annotations on our @ReconField that need to be pointed out.

- @Target – This annotation allows you to specify which Java elements your annotation should apply to. The possible target types are ANNOTATION_TYPE, CONSTRUCTOR, FIELD, LOCAL_VARIABLE, METHOD, PACKAGE, PARAMETER and TYPE. In our @ReconField annotation it is specific to the FIELD level.
- @Retention – This allows you to specify when the annotation will be available. The possible values are CLASS, RUNTIME and SOURCE. Since we'll be processing this annotation at RUNTIME, that's what it needs to be set to.

This data verification process will run one query for each database and then map the results to a common data bean that represents all of the fields for that particular type of business record. The annotations on each field of this mapped data bean tell the processor how to perform the data comparison for that particular field and its value found in each database. So let's look at a few examples of how these annotations would be used for various data comparison configurations. To verify that the value exists and matches exactly in each data source, you only need to provide the field ID and the label that should be displayed for the field on the report.

@ReconField(id = CUSTOMER_ID, label = "Customer ID")
private String customerId;

To display the values found in each data source, but not perform any data comparison, you need to specify the element compareSources and set its value to false.

@ReconField(id = NAME, label = "NAME", compareSources = false)
private String name;

To verify the values found in specific data sources but not all of them, you would use the element sourcesToCompare. Using this would display all of the values found, but only perform data comparisons on the data sources listed in the element. This handles the case in which some data is not stored in every data source. ReconSource is an enum that contains the data sources available for comparison.

@ReconField(id = PRIVATE_PLACEMENT_FLAG, label = "PRIVATE PLACEMENT FLAG", sourcesToCompare = { ReconSource.LEGACY, ReconSource.PACE })
private String privatePlacementFlag;

Now that we've covered our basic requirements, we need to address the ability to run complex data comparisons that are specific to the field in question. To do that, we'll create a second annotation that drives the processing of custom rules.

@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
public @interface ReconCustomRule {

    /**
     * Value indicates the parameters used to instantiate a custom rule processor;
     * the default value is no parameters.
     *
     * @return The String[] of parameters to instantiate a custom rule processor.
     */
    String[] params() default {};

    /**
     * Value indicates the class of the custom rule processor to be used in comparing
     * the values from each source.
     *
     * @return The class of the custom rule processor.
     */
    Class<?> processor() default DefaultReconRule.class;
}

Very similar to the previous annotation, the biggest difference in the @ReconCustomRule annotation is that we specify a class that will execute the data comparison when the recon process runs. You can only define the class that will be used, so your processor will have to instantiate and initialize any class that you specify. The class specified in this annotation needs to implement a custom rule interface, which is used by the rule processor to execute the rule. Now let's take a look at a couple of examples of this annotation. In this example, we're using a custom rule that checks whether the stock exchange is outside the United States and, if so, skips the data comparison. To do this, the rule needs to check the exchange country field on the same record.

@ReconField(id = STREET_CUSIP, label = "STREET CUSIP", compareSources = false)
@ReconCustomRule(processor = SkipNonUSExchangeComparisonRule.class)
private String streetCusip;

Here's an example where we specify a parameter for the custom rule, in this case a tolerance amount. For this specific data comparison, the values being compared cannot be off by more than 1,000. Using a parameter to specify the tolerance amount allows us to use the same custom rule on multiple fields with different tolerance amounts. The only drawback is that these parameters are static and can't be dynamic, due to the nature of annotations.

@ReconField(id = USD_MKT_CAP, label = "MARKET CAP USD", displayFormat = ReconDisplayFormat.NUMERIC_WHOLE, sourcesToCompare = { ReconSource.LEGACY, ReconSource.PACE, ReconSource.BOB_PRCM })
@ReconCustomRule(processor = ToleranceAmountRule.class, params = { "10000" })
private BigDecimal usdMktCap;

As you can see, we've designed quite a bit of flexibility into a data verification report for multiple databases by using just a couple of fairly simple annotations. For this particular case, the annotations drive the data comparison processing, so we're actually evaluating the annotations that we find on the mapped data bean and using those to direct the processing.

Conclusion

There are numerous articles out there already about Java annotations, what they do, and the rules for using them. I wanted this article to focus more on an example of why you might want to consider using them, so you can see the benefit directly. Keep in mind that this is only the starting point; once you have decided on creating annotations, you'll still need to figure out how to process them to really take full advantage of them.
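To give a quick taste of what that processing can look like, here is a minimal, hypothetical sketch of reading the @ReconField metadata from a mapped data bean with reflection. The CustomerRecord bean below is a placeholder of my own, not the article's actual code, and it uses string literals where the real code uses ID constants:

import java.lang.reflect.Field;

// Hypothetical mapped data bean, standing in for the article's real beans.
class CustomerRecord {

    @ReconField(id = "CUSTOMER_ID", label = "Customer ID")
    private String customerId;

    @ReconField(id = "NAME", label = "NAME", compareSources = false)
    private String name;
}

public class ReconFieldScanner {

    public static void main(String[] args) {
        // Walk the declared fields and read the annotation metadata that
        // drives the comparison and reporting process. This works because
        // @ReconField has RUNTIME retention.
        for (Field field : CustomerRecord.class.getDeclaredFields()) {
            ReconField recon = field.getAnnotation(ReconField.class);
            if (recon == null) {
                continue; // field is not part of the reconciliation report
            }
            System.out.printf("field=%s id=%s label=%s compare=%b%n",
                    field.getName(), recon.id(), recon.label(), recon.compareSources());
        }
    }
}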
In part two, I'll show you how to process these annotations using Java reflection. Until then, here are a couple of good resources to learn more about Java annotations:

- The Java Annotation Tutorial – http://docs.oracle.com/javase/tutorial/java/annotations/
- Java Annotations – http://tutorials.jenkov.com/java/annotations.html
- How Annotations Work – http://java.dzone.com/articles/how-annotations-work-java

Reference: Creating Your Own Java Annotations from our JCG partner Jonny Hackett at the Keyhole Software blog.

Converting JSON to XML to Java Objects using XStream

The XStream library can be an effective tool for converting between JSON, Java objects, and XML. Let's explore each of these translations one by one and see which driver is used for each.

Handling JSON

To convert JSON to Java objects, all you have to do is initialize an XStream object with an appropriate driver and you are ready to serialize your objects to (and from) JSON. XStream currently delivers two drivers for JSON-to-object conversion:

- JsonHierarchicalStreamDriver: does not require an additional dependency, but can only be used to write JSON, not read it.
- JettisonMappedXmlDriver: based on Jettison, and can also deserialize JSON back into Java objects.

Jettison driver

The Jettison driver uses the Jettison StAX parser to read and write data in JSON format. It has been available in XStream since version 1.2.2 and is implemented in the com.thoughtworks.xstream.io.json.JettisonMappedXmlDriver class. In order to get this working, we need to add the dependencies to our pom:

<dependencies>
    <dependency>
        <groupId>com.thoughtworks.xstream</groupId>
        <artifactId>xstream</artifactId>
        <version>1.4.7</version>
    </dependency>
    <dependency>
        <groupId>org.codehaus.jettison</groupId>
        <artifactId>jettison</artifactId>
        <version>1.1</version>
    </dependency>
</dependencies>

And the code to convert an object to JSON and JSON back to an object:

XStream xstream = new XStream(new JettisonMappedXmlDriver());
String json = xstream.toXML(obj);    // converts Object to JSON
Object result = xstream.fromXML(json); // converts JSON to Object

Serializing an object to XML

To serialize an object to XML, XStream uses two drivers:

StaxDriver:

XStream xstream = new XStream(new StaxDriver());
String xml = xstream.toXML(obj);    // converts Object to XML
Object result = xstream.fromXML(xml); // converts XML to Object

DomDriver:

XStream xstream = new XStream(new DomDriver());
String xml = xstream.toXML(obj);    // converts Object to XML
Object result = xstream.fromXML(xml); // converts XML to Object

Finally, let's see all of these in one class:

package com.anirudh;

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.json.JettisonMappedXmlDriver;
import com.thoughtworks.xstream.io.xml.DomDriver;
import com.thoughtworks.xstream.io.xml.StaxDriver;

/**
 * Created by anirudh on 15/07/14.
 */
public class Transformer<T> {

    private static final XStream XSTREAM_INSTANCE = null;

    public T getObjectFromJSON(String json) {
        return (T) getInstance().fromXML(json);
    }

    public String getJSONFromObject(T t) {
        return getInstance().toXML(t);
    }

    private XStream getInstance() {
        if (XSTREAM_INSTANCE == null) {
            return new XStream(new JettisonMappedXmlDriver());
        } else {
            return XSTREAM_INSTANCE;
        }
    }

    public T getObjectFromXML(String xml) {
        return (T) getStaxDriverInstance().fromXML(xml);
    }

    public String getXMLFromObject(T t) {
        return getStaxDriverInstance().toXML(t);
    }

    public T getObjectFromXMLUsingDomDriver(String xml) {
        return (T) getDomDriverInstance().fromXML(xml);
    }

    public String getXMLFromObjectUsingDomDriver(T t) {
        return getDomDriverInstance().toXML(t);
    }

    private XStream getStaxDriverInstance() {
        if (XSTREAM_INSTANCE == null) {
            return new XStream(new StaxDriver());
        } else {
            return XSTREAM_INSTANCE;
        }
    }

    private XStream getDomDriverInstance() {
        if (XSTREAM_INSTANCE == null) {
            return new XStream(new DomDriver());
        } else {
            return XSTREAM_INSTANCE;
        }
    }
}

Write a JUnit class to test it:

package com.anirudh;

import com.anirudh.domain.Product;
import org.junit.Before;
import org.junit.Test;

/**
 * Created by anirudh on 15/07/14.
 */
public class TransformerTest {

    private Transformer<Product> productTransformer;
    private Product product;

    @Before
    public void init() {
        productTransformer = new Transformer<Product>();
        product = new Product(123, "Banana", 23.00);
    }

    @Test
    public void testJSONToObject() {
        String json = productTransformer.getJSONFromObject(product);
        System.out.println(json);
        Product convertedProduct = productTransformer.getObjectFromJSON(json);
        System.out.println(convertedProduct.getName());
    }

    @Test
    public void testXMLtoObjectForStax() {
        String xml = productTransformer.getXMLFromObject(product);
        System.out.println(xml);
        Product convertedProduct = productTransformer.getObjectFromXML(xml);
        System.out.println(convertedProduct.getName());
    }

    @Test
    public void testXMLtoObjectForDom() {
        String xml = productTransformer.getXMLFromObjectUsingDomDriver(product);
        System.out.println(xml);
        Product convertedProduct = productTransformer.getObjectFromXMLUsingDomDriver(xml);
        System.out.println(convertedProduct.getName());
    }
}

The full code can be seen here. In the next blog, we will compare the use cases, exploring where each approach fits best.

Reference: Converting JSON to XML to Java Objects using XStream from our JCG partner Anirudh Bhatnagar at the anirudh bhatnagar blog.

Agile Myth #1: “Agile is a Methodology”

First of all, if you look up the word "methodology" in the dictionary, it says "study of methods". When people in the technical or research fields say "methodology", they really mean "process". So what is a process? A process is basically a set of instructions: you follow Step 1, Step 2, Step 3; you use this activity in situation A and another activity in situation B; you use this document template, that design notation, etc. Now, process is essential to writing good software. All the "Agile methodologies" – Scrum, XP, Kanban, etc. – prescribe processes. However, what differentiates Agile processes from other processes like RUP (which is a very good process if evaluated solely as a process) is the underlying values and principles. So Scrum, XP, Kanban – these are processes that are founded on Agile. But Agile itself is not a process. Rather, Agile is the values and principles that allow software development processes to be successful. Read the Agile Manifesto, the Principles Behind the Agile Manifesto, the Values of Extreme Programming, the 7 Key Principles of Lean Software Development. Agile processes try to reduce the amount of process in software development, but something has to replace what has been removed, or the remaining lightweight processes fall apart. It is values and principles that replace the heavyweight processes that have been removed, so that the lightweight processes have a foundation to stand on. Let me go through some of the values and principles common to many Agile processes:

Customer Value

Kent Beck, the inventor of XP, said the reason he invented XP was "to heal the divide between business people and programmers." Traditional processes tend to put development teams and business stakeholders at odds with each other, which is wrong, since the very purpose of a development team is to create something that is valuable and useful for the business stakeholders. Agile reminds development teams of this primary purpose – that we exist to create things of value to our customers. This is the main motivation for everything we do – why we solicit feedback, why we deliver in short iterations, why we allow changes even late in a project, why we place such strong emphasis on testing. The first characteristic of a team that is really Agile, as opposed to one that is just going through the motions of Agile, is that the members of the team are primarily motivated by the desire to create things of value for their customers.

Evidence-Based Decision-Making

In Agile, emphasis is placed on making decisions based on evidence. We make estimates after we've done a few iterations and have an idea of our team's velocity, not estimates just pulled out of someone's butt without any basis. We derive requirements based on actual customer feedback, not what we think our customers will want, even if the customers have not had the chance to try out a working system. We base our architectures on spikes – small prototypes which we subject to analysis and tests – not just vendor documentation, articles that we've read, or whatever is the hype technology of the year.
Technical Excellence

Of the 17 authors of the Agile Manifesto, many are thought leaders on the engineering side of software development – Kent Beck is the creator of JUnit and Test-Driven Development, Martin Fowler has written several books on design patterns and refactoring, Robert Martin has written several books on object-oriented design and code quality. A lot of people think Agile is mainly about project management, but the creators of Agile put equal if not greater emphasis on engineering as well.

Feedback, Visibility, Courage

Agile is about bringing up problems early and often, when they're still easier to fix. We have stand-up meetings not so that we can update the project manager on our work, but so that we can bring up issues that others can help us with. We have big visible charts so that everyone, not just the team but the whole organization, knows about the progress of the work and whether there are any issues. We emphasize testing so that we can discover issues earlier rather than later. For all this to work, there needs to be a culture of courage. This is especially difficult for Filipino culture. When there's a problem, we Filipinos tend to keep it to ourselves and fix it ourselves – we'll work unpaid overtime and weekends to try to fix a problem, but when we fail, the problem just ends up bigger than when we started. A culture of courage is fundamental in creating a culture and process of feedback and visibility.

Eliminating Waste

One way we eliminate waste is with requirements. When we specify a lot of requirements upfront, a lot of the work in specifying and building those requirements gets wasted, since it's only when customers are actually able to use a product that we realize many of the requirements were wrong. Instead, in Agile, we specify just enough requirements to build the smallest releasable system, so that we can get reliable feedback from customers before specifying more requirements. Another example is limiting a team's "Work in Process", or WIP. For example, if the team notices that the number of stories in process exceeds their self-designated WIP limit, they stop and check what's causing the problem. If, for example, the problem is that the developers are building more features than the testers can test, the developers should stop working and instead help with the testing. Otherwise, untested work just piles up behind the testers as waste.

Human Interaction

One of the most wasteful things I've seen is how people hide behind emails, documents and issue-tracking tools to communicate, and this just leads to issues dragging on and on. A single issue discussed over email might end up being a thread of over a hundred emails, across several departments separated by different floors, with multiple managers CC'd on the discussion. Often, many of these issues that drag on for days or weeks could be resolved in minutes by people just having a conversation with one another. This is why Agile advocates a co-located, interdisciplinary team. A developer can just talk to the DBA across the table to resolve an issue within minutes, in less time than it would have taken to type the issue up in an email or as a ticket. This is why requirements are initially written as very brief user stories, serving simply as placeholders for face-to-face discussions between the Product Owner and the development team.
This is why stand-up meetings are preferred to status reports: spontaneous collaboration can be initiated there, whereas status reports are largely ignored by all but the most meticulous of project managers.

Others

There are a number of other principles and values that I haven't covered here, which are probably much better explained at their respective sources anyway. Again, I recommend reading up on the Agile Manifesto, the Principles Behind the Agile Manifesto, the Values of Extreme Programming, and the 7 Key Principles of Lean Software Development. This is the first in a series of twelve myths of Agile that I will be discussing, based on my presentation at the SofTech 2014 software engineering conference. Hang on for my next blog post, where I will discuss Agile Myth #2: "Agile is About Project Management".

Reference: Agile Myth #1: "Agile is a Methodology" from our JCG partner Calen Legaspi at the Calen Legaspi blog.

Through The Looking Glass Architecture Antipattern

An anti-pattern is a commonly recurring software design pattern that is ineffective or counterproductive. One architectural anti-pattern I've seen a number of times is something I'll call the "Through the Looking Glass" pattern. It's named so because APIs and internal representations are often reflected and defined by their consumers. The core of this pattern is that the software components are split in a manner that couples multiple components inappropriately. An example is software that is split between a "front end" and a "back end", where both use the same interfaces and/or value objects to do their work. This causes a situation where almost every back-end change requires a front-end change and vice versa. As particularly painful examples, think of using GWT (in general) or trying to use SOAP APIs through JavaScript.

There are a number of other ways this pattern can show up; most often it is a team structure problem. Folks will be split by their technical expertise, but then impede progress because the DBAs who are required to make database changes are not aware of what the front-end application needs, so the front-end developers end up spending a large amount of time "remote controlling" the database team. It can also be a situation where a set of web service APIs is exposed for front-end applications, but because the back-end service calls exist only for the front-end application, there end up being a number of chicken-and-egg situations.

In almost every case, the solution to this problem is to either (1) add a translation layer, (2) simplify the design, or (3) restructure the work so that it can be isolated on a component-by-component basis. In the web service example above, it's probably more appropriate to use direct integration for the front end and expose web services AFTER the front-end work has been done. For the database problem, the DBA probably should be embedded with the front-end developer and "help" with screen design so that they have a more complete understanding of what is going on. An alternate solution to the database problem might be to allow the front-end developer to build the initial database (if they have the expertise) and let the DBA tune the design afterwards. I find that it's most often easiest and most economical to add a translation layer between layers rather than trying to unify the design, unless the solution space can be clearly limited in scope and scale. I say this because modern rapid development frameworks ([g]rails, play…) now support this in a very simple manner, and there is no great reason NOT to do it… except maybe ignorance.

Reference: Through The Looking Glass Architecture Antipattern from our JCG partner Mike Mainguy at the mike.mainguy blog.

Java’s Volatile Modifier

A while ago I wrote a Java servlet Filter that loads configuration in its init function (based on a parameter from web.xml). The filter's configuration is cached in a private field. I set the volatile modifier on the field. When I later checked the company Sonar to see if it found any warnings or issues in the code, I was a bit surprised to learn that there was a violation on the use of volatile. The explanation read:

"Use of the keyword 'volatile' is generally used to fine tune a Java application, and therefore, requires a good expertise of the Java Memory Model. Moreover, its range of action is somewhat misknown. Therefore, the volatile keyword should not be used for maintenance purpose and portability."

I would agree that volatile is misknown by many Java programmers. For some it is even unknown. Not only because it's rarely used in the first place, but also because its definition changed since Java 1.5. Let me get back to this Sonar violation in a bit and first explain what volatile means in Java 1.5 and up (until Java 1.8 at the time of writing).

What is Volatile?

While the volatile modifier itself comes from C, it has a completely different meaning in Java. This may not help in growing an understanding of it; googling for volatile could lead to different results. Let's take a quick side step and see what volatile means in C first. In the C language the compiler ordinarily assumes that variables cannot change value by themselves. While this makes sense as default behavior, sometimes a variable may represent a location that can be changed (like a hardware register). Using a volatile variable instructs the compiler not to apply these optimizations. Back to Java. The meaning of volatile in C would be useless in Java. The JVM uses native libraries to interact with the OS and hardware. Furthermore, it is simply impossible to point Java variables to specific addresses, so variables actually won't change value by themselves. However, the value of variables on the JVM can be changed by different threads. By default the compiler assumes that variables won't change in other threads. Hence it can apply optimizations such as reordering memory operations and caching the variable in a CPU register. Using a volatile variable instructs the compiler not to apply these optimizations. This guarantees that a reading thread always reads the variable from memory (or from a shared cache), never from a local cache.

Atomicity

Furthermore, on a 32-bit JVM volatile makes writes to a 64-bit variable atomic (such as longs and doubles). To write a variable, the JVM instructs the CPU to write an operand to a position in memory. When using the 32-bit instruction set, what if the size of a variable is 64 bits? Obviously the variable must be written with two instructions, 32 bits at a time. In multi-threaded scenarios another thread may read the variable halfway through the write. At that point only the first half of the variable has been written. This race condition is prevented by volatile, effectively making writes to 64-bit variables atomic on 32-bit architectures. Note that above I talked about writes, not updates. Using volatile won't make updates atomic. E.g. ++i when i is volatile would read the value of i from the heap or L3 cache into a local register, increment that register, and write the register back into the shared location of i. In between reading and writing i, it might be changed by another thread. Placing a lock around the read and write instructions makes the update atomic.
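To see this lost-update race in action, here is a small self-contained demo (my own illustration, not from the original post). It usually prints a value below 20000, because the two threads' read-increment-write sequences interleave:

public class VolatileIncrementDemo {

    // volatile guarantees visibility, but ++ is still a read-modify-write.
    private static volatile int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 10000; i++) {
                counter++; // not atomic: read, increment, write
            }
        };
        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 20000, but lost updates typically make it smaller.
        System.out.println(counter);
    }
}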
Better still, instead of a lock, use the non-blocking instructions from the atomic variable classes in the java.util.concurrent.atomic package.

Side Effect

A volatile variable also has a side effect on memory visibility. Not only are changes to the volatile variable itself visible to other threads, but so are any side effects of the code that led up to the change, when a thread reads the volatile variable. Or more formally, a volatile variable establishes a happens-before relationship with subsequent reads of that variable. I.e. from the perspective of memory visibility, writing a volatile variable is effectively like exiting a synchronized block, and reading a volatile variable like entering one.

Choosing Volatile

Back to my use of volatile to initialize a configuration once and cache it in a private field. Up to now I believe the best way to ensure visibility of this field to all threads is to use volatile. I could have used AtomicReference instead. But since the field is only written once (after construction, hence it cannot be final), atomic variables communicate the wrong intent. I don't want to make updates atomic; I want to make the cache visible to all threads. And for what it's worth, the atomic classes use volatile too.

Thoughts on this Sonar Rule

Now that we've seen what volatile means in Java, let's talk a bit more about this Sonar rule. In my opinion this rule is one of the flaws in configurations of tools like Sonar. Using volatile can be a really good thing to do if you need shared (mutable) state across threads. Sure, you must keep this to a minimum. But the consequence of this rule is that people who don't understand what volatile is follow the recommendation not to use it. If they remove the modifier, they effectively introduce a race condition. I do think it's a good idea to automatically raise red flags when misknown or dangerous language features are used. But maybe this is only a good idea when there are better alternatives to solve the same line of problems. In this case, volatile has no such alternative. Note that in no way is this intended as a rant against Sonar. However, I do think that people should select a set of rules that they find important to apply, rather than embracing default configurations. I find the idea of using rules that are enabled by default a bit naive. There's an extremely high probability that your project is not the one that the tool maintainers had in mind when picking their standard configuration. Furthermore, I believe that as you encounter a language feature you don't know, you should learn about it. As you learn about it, you can decide if there are better alternatives.

Java Concurrency in Practice

The de facto standard book about concurrency on the JVM is Java Concurrency in Practice by Brian Goetz. It explains the various aspects of concurrency at several levels of detail. If you use any form of concurrency in Java (or impure Scala), make sure you at least read the first three chapters of this brilliant book to get a decent high-level understanding of the matter.

Reference: Java's Volatile Modifier from our JCG partner Bart Bakker at the Software Craft blog.
Thoughts on this Sonar Rule

Now that we have seen what volatile means in Java, let's talk a bit more about this Sonar rule. In my opinion this rule is one of the flaws in the default configurations of tools like Sonar. Using volatile can be a really good thing to do if you need shared (mutable) state across threads, though you must of course keep that to a minimum. But the consequence of this rule is that people who don't understand volatile follow the recommendation to not use it, and by removing the modifier they effectively introduce a race condition. I do think it is a good idea to automatically raise red flags when misknown or dangerous language features are used, but perhaps only when there are better alternatives that solve the same class of problems. In this case, volatile has no such alternative.

Note that this is in no way intended as a rant against Sonar. I do think, however, that people should select the set of rules they find important to apply, rather than embracing default configurations. I find the idea of simply using the rules that are enabled by default a bit naive; there is an extremely high probability that your project is not the one the tool maintainers had in mind when picking their standard configuration. Furthermore, I believe that when you encounter a language feature you don't know, you should learn about it. As you learn about it, you can decide whether there are better alternatives.

Java Concurrency in Practice

The de facto standard book about concurrency on the JVM is Java Concurrency in Practice by Brian Goetz. It explains the various aspects of concurrency at several levels of detail. If you use any form of concurrency in Java (or impure Scala), make sure you at least read the first three chapters of this brilliant book to get a decent high-level understanding of the matter.

Reference: Java's Volatile Modifier from our JCG partner Bart Bakker at the Software Craft blog....

Test Attribute #2: Readability

This is the 2nd post on test attributes, which were introduced in the now famous "How to test your tests" post. We often forget that most of the value we get from tests comes after we've written them. Sure, TDD helps design the code, but let's face it: once everything works for the first time, our tests become the future guardians of our code. With them in place, we can change the code, hopefully for the better, knowing that everything still works.

But if (and when) something breaks, there's work to be done. We need to understand what worked before that doesn't now. We then need to analyze the situation: given what we're doing right now, should we fix this problem? Or is this new functionality that we now need to cover with new tests, throwing away the old one? Finally there's coding and testing again, depending on the result of our analysis.

The further we move in time, the staler the tests and code become in our minds, until we forget about them completely. The cost of change rises. The analysis phase grows longer, because we need to reacquaint ourselves with the surroundings. We need to re-learn what still works and what stopped working. Even if we once knew which changes could cause side effects, and how those would play out, we no longer remember. Effective tests minimize this process. They need to be readable.

Readability is subjective. What I find readable now (immediately after I wrote it) will not seem so in 6 months, let alone to someone else. So instead of trying to define test readability, let's break it down into elements we care about and can evaluate.

What A Name

The most important part of a test (apart from testing the right thing) is its name. The reason is simple: the name is the first thing we see when the test fails. It is the first clue we get that something is wrong, and therefore it needs to tell us as much as possible. The name of a test should include (at least) the specific scenario and the expected result. If we're testing APIs, it should say that too. For example:

@Test public void divideCounterBy5Is25() { ...

I can understand what the test covers (a scenario about dividing Counter), the details (division by 5) and the expected result for this scenario (25). If it sounds like a sentence, even better; good names come from describing the tests out loud. It doesn't matter whether you use capitalization, underscores, or anything else, as long as you use the convention your team agrees on. Names should also be specific enough to mentally discern a test from its sibling tests. So, in our example:

@Test public void divideCounterBy0Throws() { ...

This name resembles the first closely enough, because of the shared prefix, to identify it as a "sibling" scenario, while the specific scenario and result differ. That matters: when the two appear together in the test runner, one failing and one passing, it helps us locate the problem before we even start to debug. These are clues for resolving the problem.

What A Body

If the name doesn't help locate the problem, the test body should fill in the gaps. It should contain all the information needed to understand the scenario. Here are a few tips for making test code readable (see the sketch after this list):

- Tests should be short. About 10-15 lines short.
- If the setup is long, extract it to functions with descriptive names.
- Avoid pre-test functions like JUnit's @Before or MSTest's [TestInitialize]. Instead, call setup methods directly from the test. When you look at the code, setup and teardown need to be visible; otherwise you'll need to search further and assume even more. Debugging tests that rely on setup methods is no fun either, because you enter the test in a context that may surprise you.
- Avoid base test classes. They too hide information relevant to understanding the scenarios. Preferring composition over inheritance works here too.
- Make the assert part stand out.
- Make sure that body and name align.
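Putting these tips together, a minimal sketch of the two example tests might look like this. The Counter class and the counterWithValue helper are my own illustration (the original post shows only the test signatures), assuming JUnit 4:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CounterDivisionTest {

    @Test
    public void divideCounterBy5Is25() {
        Counter counter = counterWithValue(125); // setup called directly, not hidden in @Before

        counter.divideBy(5);

        assertEquals(25, counter.value());       // the assert stands out as the final step
    }

    @Test(expected = ArithmeticException.class)
    public void divideCounterBy0Throws() {
        counterWithValue(125).divideBy(0);
    }

    // Descriptive setup function instead of a @Before method: visible from the test itself.
    private Counter counterWithValue(int value) {
        return new Counter(value);
    }

    // Hypothetical class under test, included so the sketch is self-contained.
    static class Counter {
        private int value;
        Counter(int value) { this.value = value; }
        void divideBy(int divisor) { value = value / divisor; } // int division by 0 throws
        int value() { return value; }
    }
}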
Analysis takes time, and the worst (and slowest) kind requires debugging. Tests (and code) should be readable enough to help us bypass this wasteful process.

Better Readability In Minutes

We are biased; we think the code and tests we write are so good that everyone can understand them. We're often wrong. In agile, feedback is the answer. Use the "Law of the Third Ear": grab an unsuspecting colleague's ear, pull it close to your screen, and she can tell you whether she understands what the test does. Even better, pair while writing the tests; the feedback comes as a by-product, and you get better tests. Whatever you do, don't leave it for later. And don't use violence if you don't need to: make tests readable now, so you can read them later.

Reference: Test Attribute #2: Readability from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

The Life(Cycles) of UX/UI Development

It recently occurred to me that not one of the dozens and dozens of user interfaces I've worked on over the years had the same methodology/lifecycle. Many of those were results of the environments under which they were constructed: startup, BIG company, government contract, side-project, open-source, freelance, etc. But the technology also played a part in choosing the methodology we used.

Consider the evolution of UI technology for a moment. Back in the early days of unix/x-windows, UI development was a grind. It wasn't easy to relocate controls and re-organize interactions. Because of that, we were forced to first spend some time with a napkin and a sharpie, and run that napkin around to get feedback, etc. The "UX" cycle was born.

Then along came things like Visual Basic/C++ and WYSIWYG development. The UI design was literally the application. Drag a button here. Double click. Add some logic. Presto, instant prototype… and application. You could immediately get feedback on the look and feel. It was easy to relocate and reorganize things and react to user feedback. What happened to the "UX" cycle? It collapsed into the development cycle. The discipline wasn't lost; it was just done differently, using the same tools/framework used for development.

Then, thanks to Mr. Gore, the world-wide web was invented, bringing HTML along with it. I'm talking early here, the days of HTML sans JavaScript frameworks. (horror to think!) UI development was thrown back into the stone ages. You needed to get your site right the first time, because adjusting the look and feel often meant rewriting the entire web site/application. Many of the "roundtrip" interactions and the MVC framework, even though physically separated from the browser, were entwined in the flow/logic of the UI. (Spring Web Flow anyone? http://projects.spring.io/spring-webflow/) In such a world, again, you wanted to make sure you got it right, because adjustments were costly.

Fortunately, in the meantime the UX discipline and its tools had advanced. It wasn't just about information display; it was about optimizing the interactions. The tools could play not only with look but also mock out feel. We could do a full UX design cycle before any code was written. Way cool. Once blessed, the design was handed off to the development team, and implementation began.

Fast forward to the present day. JavaScript frameworks have come a long way. I now know people who can mock up a user experience *in code* faster and more flexibly than with the traditional wire-framing tools. This presents an opportunity to once again collapse the toolset and smooth the design/development process into one cohesive, ongoing process instead of the somewhat disconnected, one-time traditional handoff.

I liken this to the shift that took place years ago in application design. We used to sit down and draw out the application architecture and design before coding: class hierarchies, sequence diagrams, etc. (Rational Rose and UML anyone?) But once the IDEs advanced enough, it became faster to code the design than to draw it out. The disciplines of architecture and design didn't go away; they are just done differently now.

Likewise with UX. User experience and design are still of paramount importance, and that needs to include user research, coordination with marketing, etc. But can we change the toolset at this point, so that design and development can be unified as one?
If so, imagine the smooth, accelerated design->development->delivery (repeat) process we could construct! For an innovative company like ours, which depends heavily on time-to-market, that accelerated process is worth a ton. We'll pay extra for people who can bridge the gap between UX and development and play in both worlds. (Provided we don't need to sacrifice on either!) On that note, if you think you match that persona, let me know. We have a spot for you! And if you are a UXer, it might be worth playing around with Angular and Bootstrap to see how easy things have become. We don't mind on-the-job training!

Reference: The Life(Cycles) of UX/UI Development from our JCG partner Brian ONeill at the Brian ONeill's Blog blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.