Identifying JVM – trickier than expected

In Plumbr we have spent the last month building the foundation for future major improvements. One such building block was the addition of a unique identifier for the JVM, in order to link all sessions from the same JVM together. While it seems a trivial task at first, the complexities surrounding the issue start rearing their ugly heads when looking at the output of the JVM-bundled jps command listing all currently running Java processes on my machine:

My Precious:tmp my$ jps
1277 start.jar
1318 Jps
1166

If you are unfamiliar with the tool – it lists all Java processes, with the process ID in the left column and the process name in the right. Apparently the only one bothering to list itself under a meaningful name is jps itself. The other two are not so polite. The one hiding behind the start.jar acronym is a Jetty instance, and the completely anonymous one is actually Eclipse. I mean, really – the biggest IDE in the Java world cannot even bother to list itself under a name in the standard Java tools?

So, with a glimpse at the state of the art in built-in tooling, let's go back to our requirements at hand. Our current solution identifies a JVM by the process ID + machine name combination. This has one obvious disadvantage – whenever the process dies, its reincarnation is not going to get the same ID from the kernel. So whenever the JVM Plumbr was monitoring was restarted or killed, we lost track and were not able to bind the subsequent invocations together. Apparently this is not reasonable behaviour for a monitoring tool, so we went ahead to look for a better solution.

The next obvious step was taken three months ago when we allowed our users to specify a name for the machine via the -Dplumbr.application.name=my-precious-jvm startup parameter. Wise and obvious as it might seem, during those three months just 2% of our users have actually bothered to specify this parameter. So it was time to go back to the drawing board and see what options we have when trying to automatically bind a unique and human-readable identifier to a JVM instance.

Our first approach was to acquire the name of the class with the main() method and use this as the identifier. The drawbacks were immediately visible when we launched the build in a development box containing four different Jetty instances – suddenly you had four different JVMs all binding themselves under the same not-so-unique identifier.

The next attempt was to parse the content of the application and identify it from the deployment descriptors – after all, most of the applications monitored by Plumbr are packaged as WAR/EAR bundles, so it would make sense to use the information present within the bundle. And indeed, the vast majority of engineers have given meaningful names in the <display-name> parameter inside web.xml or application.xml. This solved part of the problem – when all those four Jetty instances run apps with different <display-name>'s, they appear as unique. And indeed they did, until our staging environment revealed that this might not always be the case. We had several different Plumbr Server instances on the same machine, using different application servers but deploying the same WAR file with the same <display-name> parameter. As you might guess, this again kills the uniqueness of such an ID.
Another issue raised was the fact that there are application servers running several webapps – what happens when you have deployed several WAR files to your container? So we had to dig further.

To distinguish between several JVMs running the same application on the same machine, we added the launch folder to guarantee the uniqueness of the identifier. But the problem of multiple WARs still persisted. For this we fell back to our original hypothesis, using the main class name as the identifier (a rough sketch of this kind of identifier follows at the end of this post). Some more technical nuances, such as distinguishing between the actual hash used for the ID and the user-friendly version of the same hash, aside – we now have a solution which will display something similar to the following in the list of your monitored JVMs:

Machine            JVM                             Up since
artemis.staging    Self Service (WAR)              07.07.2014 11:45
artemis.staging    E-Shop (WAR)                    08.07.2014 18:30
aramis.live        com.ringbearer.BatchProcessor   01.01.2001 00:00

So, we were actually able to come up with a decent solution, falling back to manual naming with the -Dplumbr.application.name parameter if everything else fails. One question still remains – why is something so commonly required by system administrators completely missing from the JVM tooling and APIs?

Reference: Identifying JVM – trickier than expected from our JCG partner Ivo Mägi at the Plumbr Blog blog.
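For illustration only, here is a rough sketch of the kind of automatic identifier described above – combining the main class name with the launch folder and hashing the pair. This is a hypothetical reconstruction, not Plumbr's actual code; note that sun.java.command is a HotSpot-specific property:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class JvmIdentifier {

    // Builds a stable, human-traceable ID from the main class and launch folder.
    public static String compute() throws Exception {
        // HotSpot exposes the launch command (main class or JAR, plus arguments)
        String mainClass = System.getProperty("sun.java.command", "unknown").split(" ")[0];
        String launchDir = System.getProperty("user.dir", "");

        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] hash = sha1.digest((mainClass + '|' + launchDir)
                .getBytes(StandardCharsets.UTF_8));

        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        // Keep the friendly part readable, append a hash prefix for uniqueness
        return mainClass + '@' + hex.substring(0, 8);
    }
}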

10 Tips for Creating an Agile Product Strategy with the Vision Board

Summary: This post does what its title says: it shares my recommendations for creating an agile product strategy using the Vision Board. It addresses readers who want to find out more about using a product strategy in an agile, dynamic environment and readers who want to get better at using the Vision Board.

Start with What You Know Now

Traditionally, a product strategy is the result of months of market research and business analysis work. It is intended to be factual, reliable, and ready to be implemented. But in an agile, dynamic environment a product strategy is best created differently: Start with your idea, state the vision behind it, and capture your initial strategy. Then identify the biggest risk or the crucial leap-of-faith assumption, address it, and change and improve your strategy. Repeat this process until you are confident that your product strategy is valid.

This iterative approach, pioneered by Lean Startup, helps you acquire new knowledge fast and in a goal-oriented, focused manner, addressing the key risks or assumptions. It avoids the danger of carrying out too much or too little research, reduces time-to-market, and increases your chances of creating a successful product.

Focus on What Matters Most

The term product strategy means different things to different people, and strategies come in different shapes and sizes. While that's perfectly fine, an initial product strategy that forms the basis for subsequent correction and refinement cycles should focus on what matters most: the market, the value proposition, the product's unique selling points, and the business goals. This is where my Vision Board comes in. I have designed it as the simplest thing that could possibly work to capture the vision and the product strategy. You can download it from romanpichler.com/tools/vision-board for free. For an introduction to the Vision Board, please see my post "The Product Vision Board".

Create the Product Strategy Collaboratively

A great way to create your product strategy is to employ a collaborative workshop. Invite the key people required to develop, market, sell and service your product, and the senior management sponsor. Such a workshop generates early buy-in, creates shared ownership, and leverages the collective knowledge and creativity of the group. Selling an existing vision and product strategy can be challenging; co-creation is often the better option.

Your initial Vision Board has to be good enough to create a shared understanding of your vision and initial strategy and to identify the biggest risk so you can start re-working your board. But don't spend too much time on it and don't try to make it perfect. Your board will change as you correct, improve and refine it.

Let your Vision Guide You

The product vision is the very reason for creating your product: it describes your overarching goal. The vision also forms the basis of your product strategy, as the strategy is the path to reach your overall goal. As the vision is so important, you should capture it before you describe your strategy. Here are four tips to help you capture your vision:

- Make sure that your vision does not restate your product idea but goes beyond it. For instance, the idea for this post is to write about creating an agile product strategy, but my vision is to help you develop awesome and successful products.
- Choose a broad vision, a vision that engages people and that enables you to pivot – to change the strategy while staying true to your vision.
- Make your vision statement concise; capture it in one or two sentences; and ensure that it is clear and easy to understand.
- Try to come up with a motivating and inspiring vision that helps unite everyone working on the product. Choosing an altruistic vision, a vision that focuses on the benefits created for others, can help you with this.

Put the Users First

Once you have captured your vision, work on your strategy by filling in the lower sections of the Vision Board from left to right. Start with the "Target Group", the people who should use and buy your product, rather than thinking about the cool, amazing product features or the smart business model that will monetise the product. While both aspects are important, capturing the users and customers and their needs forms the basis for making the right product and business model decisions.

While it's tempting to think of all the people who could possibly benefit from your product, it is more helpful to choose a clear-cut and narrow target group instead. Describe the users and customers as clearly as you can and state the relevant demographic characteristics. If there are several segments that your product could serve, then choose the most promising one. Working with a focused target group makes it easier to test your assumptions, to select the right test group and test method, and to analyse the resulting feedback and data. If it turns out that you have picked the wrong group or the segment is too small, then simply pivot to a new or bigger one. A large or heterogeneous target group is usually difficult to test. What's more, it leads to many diverse needs, which make it difficult to determine a clear and convincing value proposition and therefore to market and sell the product.

Clearly State the Main Problem or Benefit

Once you have captured your target users and customers, describe their needs. Consider why they would purchase and use your product. What problem will your product solve, what pain or discomfort will it remove, what tangible benefit will it create? If you identify several needs, then determine the main problem or the main benefit, for instance, by putting it at the top of the section. This helps you test your ideas and create a convincing value proposition. I find that if I am not able to clearly describe the main problem or benefit, I don't really understand why people would want to use and buy a product.

Describe the Essence of your Product

Once you have captured the needs, use the "Product" section to describe your actual product idea. State the three to five key features of your product, those features that make the product desirable and that set it apart from its competitors. When capturing the features, consider not only product functionality but also nonfunctional qualities such as performance and interoperability, and the visual design. Don't make the mistake of turning this section into a product backlog. The point is not to describe the product comprehensively or in a great amount of detail but to identify those features that really matter to the target group.

State your Business Goals and Key Business Model Elements

Use the "Value" section to state your business goals, such as creating a new revenue stream, entering a new market, meeting a profitability goal, reducing cost, developing the brand, or selling another product. Make explicit why it is worthwhile for your company to invest in the product. Prioritise the business goals and state them in the order of their importance. This will guide your efforts and help you choose the right business model. Once you have captured the business goals, state the key elements of your business model, including the main revenue sources and cost factors. This is particularly important when you work with a new or significantly changed business model.

Extend your Board

The Vision Board's simplicity is one of its assets, but it can sometimes become restricting: the Product and the Value sections can get crowded, as the board does not separately capture the competitors, the partners, the channels, the revenue sources, the cost factors, and other business model elements. Luckily there is a simple solution: extend your board and add further sections, for instance, "Competitors", "Channels", "Revenue Streams", and "Cost Factors", or download an extended version from my website.

But before using an extended Vision Board, make sure that you understand who your customers and users are and why they would buy and use the product. There is no point in worrying about the marketing and sales channels or the technologies if you are not confident that you have identified a problem that's worthwhile addressing. Additionally, a more complex board usually contains more risks and assumptions. This makes it harder to identify the biggest risk and leap-of-faith assumption.

Put it to the Test

Capturing your vision and initial product strategy on the Vision Board is great. But it's only the beginning of a journey in search of a valid strategy, as your initial board is likely to be wrong. After all, you have based the board on what you know now rather than extensive market research work. You should therefore review your initial Vision Board carefully, identify its critical risks or leap-of-faith assumptions, and select the most crucial risk or assumption. Determine the right test group, for instance, selected target users, and the right test method, such as problem interviews. Carry out the test, analyse the feedback or data collected, and update your Vision Board with the newly gained knowledge.

If you find the key risks and assumptions hard to identify, then your board may be too vague. If that's the case, then narrow down the target group, select the main problem or benefit, reduce the key features to no more than five, identify the main business benefit, and remove everything else. Your board may change significantly as you iterate over your strategy, and you may have to pivot – to choose a different strategy to make your vision come true. If your Vision Board does not change at all, then you should stop and reflect: are you addressing the right risks in the right way, and are you analysing the feedback and data effectively?

Reference: 10 Tips for Creating an Agile Product Strategy with the Vision Board from our JCG partner Roman Pichler at the Pichler's blog blog.

How Pairing & Swarming Work & Why They Will Improve Your Products

If you've been paying attention to agile at all, you've heard these terms: pairing and swarming. But what do they mean? What's the difference?

When you pair, two people work together to finish a piece of work. Traditionally, two developers paired. The "driver" wrote the piece of work. The other person, the "navigator," observed the work, providing review as the work was completed. I first paired as a developer in 1982 (kicking and screaming). I later paired in the late 1980s as the tester in several developer-tester pairs. I co-wrote Behind Closed Doors: Secrets of Great Management with Esther Derby as a pair.

There is some data that says that when we pair, the actual coding takes about 15-20% longer. However, because we have built-in code review, there is much less debugging at the end.

When Esther and I wrote the book, we threw out the original two (boring) drafts and rewrote the entire book in six weeks. We were physically together. I had to learn to stop talking. (She is very funny when she talks about this.) We both had to learn each other's idiosyncrasies about indentations and deletions when writing. That's what you do when you pair. However, the book we wrote and published is nothing like what the original drafts were. Nothing.

We did what pairs do: we discussed what we wanted a section to look like. One of us wrote for a few minutes. That person stopped. We changed. The other person wrote. Maybe we discussed as we went, but we paired. After about five hours, we were done for the day. Done. We had expended all of our mental energy.

That's pairing. Two developers. One work product. Not limited to code, okay?

Now, let's talk about swarming. Swarming is when the entire team says, "Let's take this story and get it to done, all together." You can think of swarming as pairing on steroids. Everyone works on the same problem. But how? Someone will have to write code. Someone will have to write tests. The question is this: in what order, and who navigates? What does everyone else do?

When I teach my agile and lean workshop, I ask the participants to select one feature that the team can complete in one hour. Everyone groans. Then they do it. Some teams do it by having the product owner explain what the feature is in detail. Then the developers pair and the tester(s) write tests, both automated and manual. They all come together at about the 45-minute mark. They see if what they have done works. (It often doesn't.) Then the team starts to work together, to really swarm. "What if we do this here? How about if this goes there?"

Some teams work together from the beginning. "What is the first thing we can do to add value?" (That is an excellent question.) They might move into smaller pairs, if necessary. Maybe. Maybe they need touchpoints every 15-20 minutes to re-orient themselves and say, "Where are we?" They find that if they ask for feedback from the product owner, that works well. If you first ask, "What is the first thing we can do to add value and complete this story?" you are probably on the right track.

Why Do Pairing and Swarming Work So Well?

Both pairing and swarming:

- Build feedback into development of the task at hand. No one works alone. Can the people doing the work still make a mistake? Sure. But it's less likely. Someone will catch the mistake.
- Create teamwork. You get to know someone well when you work with them that intensely.
- Expose the work. You know where you are.
- Reduce the work in progress. You are less likely to multitask, because you are working with someone else.
- Encourage you to take no shortcuts, at least in my case. Because someone was watching me, I was on my best professional behavior. (Does this happen to you, too?)

How Do Pairing and Swarming Improve Your Products?

The effect of pairing and swarming is what improves your products. The built-in feedback is what creates less debugging downstream. The improved teamwork helps people work together. When you expose the work in progress, you can measure it, see it, and have no surprises. With reduced work in progress, you can increase your throughput. You have better chances for craftsmanship.

You don't have to be agile to try pairing or swarming. You can pair or swarm on any project. I bet you already have, if you've been on a "tiger team," where you need to fix something for a "Very Important Customer," or you have a "Critical Fix" that must ship ASAP. If you had all eyes on one problem, you might have paired or swarmed. If you are agile, and you are not pairing or swarming, consider adding either or both to your repertoire, now.

Reference: How Pairing & Swarming Work & Why They Will Improve Your Products from our JCG partner Johanna Rothman at the Managing Product Development blog.

New in JAX-RS 2.0 – @BeanParam annotation

JAX-RS is awesome to say the least, and one of my favorites! Why?

- Feature rich
- Intuitive (hence the learning curve is not as steep)
- Easy to use and develop with
- Has great RIs – Jersey, RESTEasy etc.

There are enough JAX-RS fans out there who can add to this! JAX-RS 2.0 is the latest version of the JSR 311 specification and it was released along with Java EE 7.

Life without @BeanParam

Before JAX-RS 2.0, in order to pass/inject information from an HTTP request into JAX-RS resource implementation methods, one could:

- Include multiple method arguments annotated with @FormParam, @PathParam, @QueryParam etc.
- Or have a model class backed by JAXB/JSON or a custom MessageBodyReader implementation for the JAX-RS provider to be able to unmarshal the HTTP message body to a Java object – read more about this in one of my previous posts.

This means that something like an HTML5-based client would need to extract the FORM input, convert it into a JSON or XML payload and then POST it over the wire.

Simplification in JAX-RS 2.0

This process has been simplified by the introduction of the @BeanParam annotation. It helps inject custom value/domain/model objects into fields or method parameters of JAX-RS resource classes. In case you want to refer to the code (pretty simple) or download the example and run it yourself, here is the GitHub link.

All we need to do is:

- Annotate the fields of the model (POJO) class with the injection annotations that already exist, i.e. @PathParam, @QueryParam, @HeaderParam, @MatrixParam etc – basically any of the @xxxParam metadata types, and
- Make sure that we include the @BeanParam annotation while injecting a reference variable of this POJO (allowed only on METHOD, PARAMETER or FIELD).

The JAX-RS provider automatically constructs and injects an instance of your domain object, which you can now use within your methods. Just fill in the form information and POST it! (A sketch of what this can look like follows below.)

That's it... Short and sweet! Keep coding!

Reference: New in JAX-RS 2.0 – @BeanParam annotation from our JCG partner Abhishek Gupta at the Object Oriented.. blog.
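To make the two steps above concrete, here is a minimal hedged sketch; the class, field and path names are hypothetical and not taken from the original post's GitHub example (the two classes would normally live in separate files):

import javax.ws.rs.BeanParam;
import javax.ws.rs.FormParam;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// Step 1: a plain POJO whose fields carry the usual @xxxParam annotations
public class RegistrationForm {
    @PathParam("category")
    private String category;

    @FormParam("name")
    private String name;

    @FormParam("email")
    private String email;

    @HeaderParam("User-Agent")
    private String userAgent;

    // getters and setters omitted for brevity
}

// Step 2: inject it with @BeanParam; the provider populates all the fields
@Path("/register/{category}")
public class RegistrationResource {
    @POST
    public String register(@BeanParam RegistrationForm form) {
        // 'form' arrives already constructed and populated by the JAX-RS provider
        return "registered: " + form;
    }
}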

Keeping things DRY: Method overloading

A good clean application design requires discipline in keeping things DRY:

Everything has to be done once. Having to do it twice is a coincidence. Having to do it three times is a pattern.
— An unknown wise man

Now, if you're following the Xtreme Programming rules, you know what needs to be done when you encounter a pattern: refactor mercilessly. Because we all know what happens when you don't.

Not DRY: Method overloading

One of the least DRY things you can do that is still acceptable is method overloading – in those languages that allow it (unlike Ceylon, JavaScript). Being an internal domain-specific language, the jOOQ API makes heavy use of overloading. Consider the type Field (modelling a database column):

public interface Field<T> {

    // [...]

    Condition eq(T value);
    Condition eq(Field<T> field);
    Condition eq(Select<? extends Record1<T>> query);
    Condition eq(QuantifiedSelect<? extends Record1<T>> query);

    Condition in(Collection<?> values);
    Condition in(T... values);
    Condition in(Field<?>... values);
    Condition in(Select<? extends Record1<T>> query);

    // [...]
}

So, in certain cases, non-DRY-ness is inevitable, also to a given extent in the implementation of the above API. The key rule of thumb here, however, is to always have as few implementations as possible, also for overloaded methods. Try calling one method from another. For instance, these two methods are very similar:

Condition eq(T value);
Condition eq(Field<T> field);

The first method is a special case of the second one, where jOOQ users do not want to explicitly declare a bind variable. It is literally implemented as such:

@Override
public final Condition eq(T value) {
    return equal(value);
}

@Override
public final Condition equal(T value) {
    return equal(Utils.field(value, this));
}

@Override
public final Condition equal(Field<T> field) {
    return compare(EQUALS, nullSafe(field));
}

@Override
public final Condition compare(Comparator comparator, Field<T> field) {
    switch (comparator) {
        case IS_DISTINCT_FROM:
        case IS_NOT_DISTINCT_FROM:
            return new IsDistinctFrom<T>(this, nullSafe(field), comparator);

        default:
            return new CompareCondition(this, nullSafe(field), comparator);
    }
}

As you can see:

- eq() is just a synonym for the legacy equal() method
- equal(T) is a more specialised, convenience form of equal(Field<T>)
- equal(Field<T>) is a more specialised, convenience form of compare(Comparator, Field<T>)
- compare() finally provides access to the implementation of this API

All of these methods are also part of the public API and can be called by the API consumer directly, which is why the nullSafe() check is repeated in each method.

Why all the trouble? The answer is simple.

- There is only very little possibility of a copy-paste error throughout all the API...
- ... because the same API has to be offered for ne, gt, ge, lt, le.
- No matter what part of the API happens to be integration-tested, the implementation itself is certainly covered by some test.
- This way, it is extremely easy to provide users with a very rich API with lots of convenience methods, as users do not want to remember how these more general-purpose methods (like compare()) really work.

The last point is particularly important, and because of risks related to backwards-compatibility, not always followed by the JDK, for instance. In order to create a Java 8 Stream from an Iterable, you have to go through all this hassle, for instance:

// Aagh, my fingers hurt...
   StreamSupport.stream(iterable.spliterator(), false);
// ^^^^^^^^^^^^^        ^^^^^^^^^^^^^^^^^^^^^^  ^^^^^
//       |                        |               |
// Not Stream!                    |               |
//                                |               |
// Hmm, Spliterator. Sounds like  |               |
// Iterator. But what is it? -----+               |
//                                                |
// What's this true and false?                    |
// And do I need to care? ------------------------+

When, intuitively, you'd like to have:

// Not Enterprise enough
   iterable.stream();

In other words, subtle Java 8 Streams implementation details will soon leak into a lot of client code, and many new utility functions will wrap these things again and again (a small wrapper sketch follows at the end of this post). See Brian Goetz's explanation on Stack Overflow for details.

On the flip side of delegating overload implementations, it is of course harder (i.e. more work) to implement such an API. This is particularly cumbersome if an API vendor also allows users to implement the API themselves (e.g. JDBC). Another issue is the length of stack traces generated by such implementations. But we've shown before on this blog that deep stack traces can be a sign of good quality. Now you know why.

Takeaway

The takeaway is simple. Whenever you encounter a pattern, refactor. Find the most common denominator, factor it out into an implementation, and see that this implementation is hardly ever used by delegating single responsibility steps from method to method. By following these rules, you will:

- Have fewer bugs
- Have a more convenient API

Happy refactoring!

Reference: Keeping things DRY: Method overloading from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.
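As an aside, the Stream hassle above is exactly the kind of general-purpose call the article suggests wrapping once behind a convenience method. Here is a hedged sketch of such a utility for the Iterator case (the Streams class is hypothetical, neither a JDK nor a jOOQ API):

import java.util.Iterator;
import java.util.Spliterators;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public final class Streams {
    private Streams() {}

    // Wraps the StreamSupport/Spliterators boilerplate once,
    // so client code can simply call Streams.stream(iterator).
    public static <T> Stream<T> stream(Iterator<T> iterator) {
        return StreamSupport.stream(
            Spliterators.spliteratorUnknownSize(iterator, 0), false);
    }
}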

FX Playground

Introduction

FX Playground is a JavaFX-based prototyping tool or live editor that eliminates the step of compiling Java code. This concept isn't new; for instance, in the Web world there are many HTML5 playgrounds that offer online editors enabling developers to quickly prototype or experiment with various JavaScript libraries. This allows the developer to focus on visualizations or UI details without needing to set up an IDE project or mess with files. Even older than playgrounds are REPLs (Read Eval Print Loop), where dynamic languages such as Groovy, Python, Ruby, etc. provide an interactive interpreter command line tool to allow developers to quickly script code to be executed. Scala is a compiled language, but also provides a REPL tool.

After finishing the book JavaFX 8 Introduction by Example I noticed each example was created as a separate NetBeans project, which seemed a little overkill for small examples. Because the book is based on the Java language, each program needed to be compiled (via javac) prior to execution. Larger projects will typically need to be set up with a proper classpath and resources in the appropriate directory locations. Even larger projects will also need dependencies, which typically reside on Maven repositories.

JavaOne 2014

Based on timing I was able to submit a talk regarding JavaFX-based playgrounds just in time. After a while I was pleasantly surprised that my proposal (talk) was accepted. You can check out the session here. Also, I will be presenting with my good friend Gerrit Grunwald (@hansolo_), so be prepared to see awe-inspiring demos. Since the talk is a BoF (birds of a feather), the atmosphere will be low-key and very casual. I hope to see you there! The JavaOne talk is titled "JavaFX Coding Playground (JavaFX-Based Live Editor Tool) [BOF2730]". Based on the description, you'll find that the tool will be using the new Nashorn (JavaScript) engine to interact with JavaFX primitives (a tiny sketch of this idea follows at the end of this post).

The figure below depicts the FX Playground tool's editor windows and a JavaFX display area. Starting clockwise at the lower left is the code editor window, allowing the user to use JavaScript (Nashorn) to interact with nodes. Next is the JavaFX FXML editor window, allowing the user to use FXML (upper left); the FXML window is optional. In the upper right you will notice the JavaFX CSS editor window, allowing you to style nodes on the display surface. Lastly, at the bottom right is the output area, better known as the DISPLAY_SURFACE.

FX Playground in Action

Because FX Playground is still in development, I will give you a glimpse of some demos that I've created on YouTube. The following are examples with links to videos:

- FXPlayground3d – Nashorn and JavaFX 3D
- FX Playground now has a settings slide-out panel – Nashorn, Rectangle w/CSS, and MediaView
- FX Playground using the Enzo library – Nashorn and the Enzo library
- FX Playground testing video w/ MediaView and WebView – Nashorn, MediaView and WebView

Roadmap

There are plans to open-source the code, but for now there is much-needed functionality to add before public consumption. The following features are a work in progress:

- Make use of the FXML editor window
- Pop out the display panel into its own window
- Save, Save As, and Load playgrounds
- Build the software as an executable for tool users (90% done)
- Make the tool capable of using other languages (JSR 223)

I want to thank Oracle Corp., especially the following engineers who helped me (some of the engineers below are not Oracle employees):

- David Grieve – @dsgrieve
- Jim Laskey – @wickund
- Sundararajan Athijegannathan – @sundararajan_a
- Danno Ferrin – @shemnon
- Sean Phillips – @SeanMiPhillips
- Mark Heckler – @MkHeck
- Jose Perada – @JPeredaDnr
- Gerrit Grunwald – @hansolo_
- Jim Weaver – @JavaFXpert

Resources

- CarlFX's Channel – https://www.youtube.com/channel/UCNBYRHaYk9mlTmn9oAPp1VA
- 7 of the Best Code Playgrounds – http://www.sitepoint.com/7-code-playgrounds
- NetBeans – https://www.netbeans.org
- JavaFX 8 Introduction by Example – http://www.apress.com/9781430264606
- Nashorn – https://wiki.openjdk.java.net/display/Nashorn/Main
- Enzo – https://bitbucket.org/hansolo/enzo/wiki/Home
- Harmonic Code – http://harmoniccode.blogspot.com/

Reference: FX Playground from our JCG partner Carl Dea at the Carl's FX Blog blog.
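As a taste of the underlying idea – evaluating user-typed JavaScript against live JavaFX nodes – here is a minimal hedged sketch using the standard javax.script API. The variable name "root" and the circle script are my own choices; FX Playground's actual wiring is more elaborate, and in a real tool the scene-graph mutation would have to run on the JavaFX Application Thread:

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javafx.scene.Group;

public class NashornFxSketch {

    // Evaluates a JavaScript snippet that adds a circle to the given scene graph root.
    public static void addCircleViaScript(Group root) throws Exception {
        ScriptEngine nashorn = new ScriptEngineManager().getEngineByName("nashorn");
        nashorn.put("root", root); // expose the JavaFX node to the script
        nashorn.eval(
            "var Circle = Java.type('javafx.scene.shape.Circle');" +
            "root.getChildren().add(new Circle(100, 100, 40));");
    }
}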

How to Evaluate Job Offers

At some point in a career, many will be in a position to decide between multiple job offers from different companies – or at worst having to decide between accepting a new job or staying put. When starting to compare offers, it is common for the recipient to focus on the known quantities (i.e. salary, bonus, etc.) and perhaps a couple of additional details that are generally considered more subjective (work environment, technologies). In order to make a truly wise choice, it is also useful to include less obvious factors as well as future considerations, as those generally have a much stronger influence on career earnings and success. These are harder to predict, but must enter into your decision unless your sole objective is to meet some immediate short-term need.

The easy part

The most common components factoring into gross compensation are:

- Cash compensation (salary, bonus, sign-on) – If the bonus is listed as guaranteed, the figure can be lumped into salary. Most bonuses are not guaranteed, but rather are tied to personal and/or company goals being met. Some firms or individual employees are willing to provide data on bonus history. Sign-ons are used to sweeten an offer or to rectify a potential cost the new hire would incur by leaving their job, such as an unpaid bonus.
- Healthcare premiums and contributions – Offer letters typically do not list employee out-of-pocket insurance cost, and personal circumstances may weigh heavily on how one values health insurance. Employer contribution can vary from 50-100%, while other companies offer employee-only contribution (no contribution towards spouse/partner/child), which can result in a total compensation difference of a few percent.
- 401k or retirement plans – Employer match and contribution to these plans can be significant. Consider both the dollar amounts and the vesting schedules.
- Education reimbursement – If considering a return to school, this policy could make a difference.
- Paid time off – Although the real value any employee places on time off will vary, the dollar value of each day of PTO can be estimated using a formula: (annual salary / estimated annual work hours) x work hours in a day.

Many candidates make the mistake of basing their decisions with too much weight placed on base salary. This may be attributed to our emotional attachment to numbers and compensation "milestones" (usually round numbers), the perception of status that results from salary, and the inability of candidates to accurately gather and calculate the details of a comprehensive package. A friend might tell you about her 100K salary, but how often do you hear someone independently offer up that they pay 10K per year for health insurance and only get one week of vacation?

Additional considerations

The details above are all easily obtained, quantified, and require no interpretation. Everything from this point on will require a bit of investigation as well as some educated guessing.

- Expected hours – To put a value on time for offer comparison, a quick calculation to convert salary into dollars per hour can be a telling figure. All else being equal, that 80K offer with a 40-hour work week is more per hour than the 100K offer at 55 hours. Estimates of work hours may not be accurate, so multiple data sources can help. (A quick worked comparison follows at the end of this article.)
- Commute time/cost and possibility for remote work – Distance may not be a reliable predictor of commute time or cost, and mundane details such as gas efficiency will quickly add up when you consider the trip is repeated 400+ times a year. Mass transit inefficiencies and delays have a cost to commuters as well. The ability to work remotely, even for one or two days a week, makes some difference.
- Travel – This can be viewed as a positive or a negative depending on the worker. Consider any hidden expenses that may not be reimbursed, such as child or pet care costs.
- Perks – Company-provided phone or internet, gym membership, and office meals/snacks are not things job seekers expect, yet they could provide thousands of dollars in value.
- Self-improvement budget – Some companies may be willing to foot the bill for training or conferences that the employee would have paid for anyway.

Forecasting and speculation

The most vital characteristics contributing to a job's long-term value are often hidden and unsupported by reliable data. Establishing the present-day value of any one job is somewhat complex, and trying to forecast future values requires speculation.

- Future marketability – This is a key factor in career compensation, yet it is often overlooked when the temptation of short-term gains is presented. The consideration of future marketability is most critical for new grads or junior-level employees, who are (unfortunately) often in debt and easily influenced by short-term gains and cash compensation. What skills will be obtained in a job, and to what extent will these new skills increase market value? Will having a company's name on a résumé (whether by associated prestige or number of direct competitors) create some additional demand for services? If a goal is to maximize lifetime earnings, one could theorize that a year of unpaid work at a place like Google or Facebook is preferable to two years of paid work at many other companies.
- Promotions and raises – Job offers only include starting salary/title. How, and how often, does a company evaluate employees for salary increases, and what amounts might be expected for performers? Do they tend to promote from within or hire from outside? Is there a career path, and is there a point where compensation plateaus?
- Stress and satisfaction – It's impossible to place a hard value on work stress or job satisfaction, and the amount of either is difficult to predict. Satisfaction, work/life balance, and stress can impact both health and productivity, which could also contribute to marketability.
- Stock/stock options – The list of factors that influence the potential value is too long to enumerate. Vesting schedules may have substantial impact on perceived value if a long tenure isn't expected.
- Environment, team, management – Companies try to make a strong positive impression during interviews, but that image doesn't always accurately reflect day-to-day operations. Younger workers should place considerable weight on whether there are team members to learn from and mentors who are both available and willing to guide. Employees with long tenures will have insight, though the opinion of more recent hires may be more relevant to anyone considering an offer.

Conclusions

Job change decisions are complex, and tough choices usually end up coming from the gut. The immediate results of a choice are easily identified and quantified, but the more important long-term ramifications require research, interpretation, and a bit of conjecture. When combining all of the smaller elements of a compensation package, the highest salary will not always be the most lucrative offer.

Reference: How to Evaluate Job Offers from our JCG partner Dave Fecak at the Job Tips For Geeks blog.
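Here is the worked version of the hourly-rate comparison and PTO formula above, as a minimal sketch; the assumption of roughly 50 working weeks per year is mine, not a figure from the article:

public class OfferMath {
    public static void main(String[] args) {
        // Assume ~50 working weeks per year (illustrative)
        double hourlyA = 80_000.0 / (40 * 50);  // 80K at 40 h/week  -> $40.00/hour
        double hourlyB = 100_000.0 / (55 * 50); // 100K at 55 h/week -> ~$36.36/hour

        // Value of one PTO day: (annual salary / annual work hours) x hours per day
        double ptoDay = (80_000.0 / (40 * 50)) * 8; // -> $320 per day off

        System.out.printf("Offer A: $%.2f/h, Offer B: $%.2f/h, one PTO day: $%.2f%n",
                hourlyA, hourlyB, ptoDay);
    }
}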

Building extremely large in-memory InputStream for testing purposes

For some reason I needed an extremely large, possibly even infinite InputStream that would simply return the same byte[] over and over. This way I could produce an insanely big stream of data by repeating a small sample. Sort of similar functionality can be found in Guava: Iterable<T> Iterables.cycle(Iterable<T>) and Iterator<T> Iterators.cycle(Iterator<T>). For example if you need an infinite source of 0 and 1, simply say Iterables.cycle(0, 1) and get 0, 1, 0, 1, 0, 1... infinitely. Unfortunately I haven't found such a utility for InputStream, so I jumped into writing my own. This article documents the many mistakes I made during that process, mostly due to overcomplicating and overengineering a straightforward solution.

We don't really need an infinite InputStream; being able to create a very large one (say, 32 GiB) is enough. So we are after the following method:

public static InputStream repeat(byte[] sample, int times)

It basically takes a sample array of bytes and returns an InputStream returning these bytes. However, when sample runs out, it rolls over, returning the same bytes again – this process is repeated the given number of times, until the InputStream signals end. One solution that I haven't really tried but which seems most obvious:

public static InputStream repeat(byte[] sample, int times) {
    final byte[] allBytes = new byte[sample.length * times];
    for (int i = 0; i < times; i++) {
        System.arraycopy(sample, 0, allBytes, i * sample.length, sample.length);
    }
    return new ByteArrayInputStream(allBytes);
}

I see you laughing there! If sample is 100 bytes and we need 32 GiB of input repeating these 100 bytes, the generated InputStream shouldn't really allocate 32 GiB of memory; we must be more clever here. As a matter of fact, repeat() above has another subtle bug. Arrays in Java are limited to 2^31 - 1 entries (int); 32 GiB is way above that. The reason this program compiles is a silent integer overflow here: sample.length * times. This multiplication doesn't fit in an int.

OK, let's try something that at least theoretically can work. My first idea was as follows: what if I create many ByteArrayInputStreams sharing the same byte[] sample (they don't do an eager copy) and somehow join them together? Thus I needed some InputStream adapter that could take an arbitrary number of underlying InputStreams and chain them together – when the first stream is exhausted, switch to the next one. This awkward moment when you look for something in Apache Commons or Guava and apparently it was in the JDK forever... java.io.SequenceInputStream is almost ideal. However it can only chain precisely two underlying InputStreams. Of course, since SequenceInputStream is an InputStream itself, we can use it recursively as an argument to an outer SequenceInputStream. Repeating this process we can chain an arbitrary number of ByteArrayInputStreams together:

public static InputStream repeat(byte[] sample, int times) {
    if (times <= 1) {
        return new ByteArrayInputStream(sample);
    } else {
        return new SequenceInputStream(
                new ByteArrayInputStream(sample),
                repeat(sample, times - 1)
        );
    }
}

If times is 1, just wrap sample in a ByteArrayInputStream. Otherwise use SequenceInputStream recursively. I think you can immediately spot what's wrong with this code: too deep recursion. The nesting level is the same as the times argument, which will reach millions or even billions. There must be a better way.
Luckily a minor improvement changes the recursion depth from O(n) to O(log n):

public static InputStream repeat(byte[] sample, int times) {
    if (times <= 1) {
        return new ByteArrayInputStream(sample);
    } else {
        return new SequenceInputStream(
                repeat(sample, times / 2),
                repeat(sample, times - times / 2)
        );
    }
}

Honestly, this was the first implementation I tried. It's a simple application of the divide and conquer principle, where we produce the result by evenly splitting it into two smaller sub-problems. Looks clever, but there is one issue: it's easy to prove we create t (t = times) ByteArrayInputStreams and O(t) SequenceInputStreams. While the sample byte array is shared, millions of various InputStream instances are wasting memory. This leads us to an alternative implementation, creating just one InputStream, regardless of the value of times:

import com.google.common.collect.Iterators;
import org.apache.commons.lang3.ArrayUtils;

public static InputStream repeat(byte[] sample, int times) {
    final Byte[] objArray = ArrayUtils.toObject(sample);
    final Iterator<Byte> infinite = Iterators.cycle(objArray);
    final Iterator<Byte> limited = Iterators.limit(infinite, sample.length * times);
    return new InputStream() {
        @Override
        public int read() throws IOException {
            return limited.hasNext() ? limited.next() & 0xFF : -1;
        }
    };
}

We will use Iterators.cycle() after all. But first we have to translate byte[] into Byte[], since iterators can only work with objects, not primitives. There is no idiomatic way to turn an array of primitives into an array of boxed types, so I use ArrayUtils.toObject(byte[]) from Apache Commons Lang. Having an array of objects we can create an infinite iterator that cycles through the values of sample. Since we don't want an infinite stream, we cut off the infinite iterator using Iterators.limit(Iterator<T>, int), again from Guava. Now we just have to bridge from Iterator<Byte> to InputStream – after all, semantically they represent the same thing.

This solution suffers from two problems. First of all, it produces tons of garbage due to unboxing. Garbage collection is not that much concerned about dead, short-living objects, but it still seems wasteful. The second issue we already faced previously: the sample.length * times multiplication can cause integer overflow. It can't be fixed because Iterators.limit() takes int, not long – for no good reason. BTW we avoided a third problem by doing a bitwise AND with 0xFF – otherwise a byte with value -1 would signal end of stream, which is not the case. x & 0xFF is correctly translated to unsigned 255 (int).

So even though the implementation above is short and sweet, declarative rather than imperative, it's too slow and limited. If you have a C background, I can imagine how uncomfortable you were seeing me struggle. After all, the most straightforward, painfully simple and low-level implementation was the one I came up with last:

public static InputStream repeat(byte[] sample, int times) {
    return new InputStream() {
        private long pos = 0;
        private final long total = (long) sample.length * times;

        public int read() throws IOException {
            return pos < total ?
                    sample[(int) (pos++ % sample.length)] :
                    -1;
        }
    };
}

GC-free, pure JDK, fast and simple to understand. Let this be a lesson for you: start with the simplest solution that jumps to your mind, don't overengineer and don't be too smart. My previous solutions, declarative, functional, immutable, etc. – maybe they looked clever, but they were neither fast nor easy to understand.
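For completeness, here is a tiny usage sketch of the final repeat() implementation; the sample bytes and the repeat count are arbitrary, and repeat() is assumed to be the last version defined above:

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class RepeatDemo {
    public static void main(String[] args) throws IOException {
        byte[] sample = "0123456789".getBytes(StandardCharsets.US_ASCII);

        // Repeats the 10-byte sample three times: 30 bytes in total
        InputStream in = repeat(sample, 3); // repeat() as defined above
        int b;
        while ((b = in.read()) != -1) {
            System.out.print((char) b); // prints 012345678901234567890123456789
        }
    }
}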
The utility we just developed is not just a toy project; it will be used in a subsequent article.

Reference: Building extremely large in-memory InputStream for testing purposes from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.

Test Attribute #4 – Accuracy

This is the 4th post on test attributes that were described in the now even more famous "How to test your tests" post. If you want training and/or coaching on testing, contact me.

Accuracy is about pinpointing the location of the failing code. If we know where the offending code is, we can easily analyze what problem we caused, and move on to fixing it. The trivial example is tests that check different methods. Of course, if one of them fails, we know where to look.

Here's another simple case, on the same method. We have a PositiveCalculator class whose Add method adds two positive numbers, or throws an exception if they are not so positive:

public int Add(int a, int b)
{
    if ((a < 0) || (b < 0))
        throw new ArgumentException();

    return a + b;
}

We can then write the following tests:

[Test]
public void AddTwoPositiveNumbers_GetResult()
{
    PositiveCalculator calculator = new PositiveCalculator();
    Assert.That(calculator.Add(2, 2), Is.EqualTo(4));
}

[Test]
public void AddTwoNegativeNumbers_Exception()
{
    PositiveCalculator calculator = new PositiveCalculator();
    Assert.Throws<ArgumentException>(() => calculator.Add(-5, -5));
}

Looking at the tests, we already see they check two different behaviors. When we combine what we read from the tests with the tested code, it's easy to relate the parts of the code to each test. So if one of them fails, we'll know where to look.

Unfortunately, code doesn't always look like this. It usually starts like that, but then grows to monster-size functions. When it does, it either becomes untestable, or incurs tests that are large, overlap each other, and test multiple things. None of those are accurate tests.

So what can we do? Let's start with the preventive measure: don't let the code grow. Be merciless about keeping methods small, and use the Single Responsibility Principle to extract code into smaller, easily testable and accurate functions.

But I didn't write this code! How do I make my tests accurate? Here's what you can do. Now that you have a test, or a bunch of them, it's time to make use of them: start refactoring the code. Having the tests in place will tell you if you're breaking stuff, and it's very easy to go back to working mode, because refactoring is also done in small steps. Once you have broken the code into smaller pieces, you can now write smaller tests, which give you the accuracy that the bigger tests didn't have. In fact, you might want to replace the big tests with some smaller ones, if they give better information and performance for the same coverage.

We can also make the tests more accurate with the following methods:

- One Assert per test – When you check only one thing, chances are that your test is more accurate than when checking multiple things. If you have more Asserts in your tests, break them into multiple tests.
- Test shorter scenarios – In legacy code, it's tempting to test large scenarios, because the code does a lot and does not expose entry points for single operations. Try to test shorter scenarios rather than long ones, and smaller objects rather than large ones. Try to break long scenarios into short ones. If you use the big tests to refactor the code, you can then write smaller, more accurate tests.
- Mock unrelated stuff – If you have dependencies that do multiple things, and therefore make for longer scenarios, mock them. You'll make the test more accurate because it now runs through only the relevant code you're interested in.
- Check the coverage – Visually, if possible. IDEs and tools that show visual coverage on the code are awesome, because they add another visual clue to where the impacted code is. On trivial code they don't matter much, but on complex code you can compare the paths of different tests, and by applying some elimination, you can find out where the problems are. You can also use the visual paths as feedback on how accurate your tests are, and if they aren't, make them more so.

Accuracy helps us fix problems quickly. But it's definitely not so easy to come by, because it depends very much on the tested code. However, using the combination of the methods I suggested, and making use of working tests to refactor and simplify, test accuracy is definitely within reach.

Reference: Test Attribute #4 – Accuracy from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

JAXB – A Newcomer’s Perspective, Part 1

I know what a lot of you are already thinking, so let's get this out of the way: "JAXB? As in XML? Come on, all the cool kids are using JSON."

The "XML vs. JSON" debate and the many arguments that contribute to it are pretty well documented; I won't spend a lot of time rehashing them here. I believe that each format has its uses, but even if you're in the "no XML ever" camp you still might want to read on, as the observations and techniques I discuss should be equally applicable to JSON data binding with Jackson (or similar tools). In Part 1 I describe a simple usage pattern that pairs JAXB's data binding capabilities with JPA. Of course the interactions between the two aren't always so simple, so in Part 2 I'll look at how to address a few of the complications you can expect to encounter.

The Problem

On my current project, we're building a suite of Java applications to manage the staging of materials in a manufacturing process. We decided to build "from the outside in" to facilitate user-facing demos after any given iteration. So in the first iteration we built some of the screens with hard-coded dummy data; then with each successive iteration we added more infrastructure and logic behind the screens.

To make early demos more interactive, we decided to create a "test console" for the central app. A person typing commands at the console can simulate the behavior of the "not yet implemented" parts of the system. The cost to build the console is modest thanks to tools like Antlr 4 that make command parsing simple, and we see long-term value in using the console for testing and diagnostics.

We reached a point where the system's behavior needed to be driven by data from another app. The "other app" that's responsible for creating and maintaining this data hasn't been written and won't be for some time, so we needed a way to load sample data through the console.

Options

Essentially our task was to build (or leverage) a data loader. We settled on XML as a likely format for the file, and then rifled through the list of tools with which our team would generally be familiar. DBUnit has data-loading capabilities (intended for setting up repeatable test conditions). It supports two different XML schemas ("flat" and "full"), each of which is clearly table-oriented. It also provides for substitution variables, so we could build template files and allow the console input to set final values.

I harbor some reservations about using a unit testing tool in this way, but of the arrows in the team's quiver it could be the closest fit. For better or worse, my first attempt to apply it was not successful (turns out I was looking at the wrong part of the DBUnit API), which got me thinking a little further outside the box. We already had a way – namely Hibernate – to push data into our database; so when I phrased the problem in terms of "how to create entity instances from XML documents," JAXB emerged as an obvious contender. I was pleased to discover that Java ships with a JAXB implementation, so I set to work trying it out.

A Newcomer's Perspective

Never having used JAXB, I started with a little research. Much of the material I found dealt with generating Java classes from an XML schema. This isn't surprising – it's a big part of what the tool can do – but in my case I wanted to bind data to my existing Hibernate-mapped domain classes. And that leads to something that may be a bit more surprising: some of the most comprehensive tutorials I found didn't seem to anticipate this usage.
I think this is a good demonstration of the way that your starting assumptions about a tool can shape how you think about it and how you use it. If you start by comparing JAXB with DOM, as several online resources do, then it may be natural to think of the output of an unmarshalling operation as a document tree that needs to be traversed and processed, perhaps copying relevant data to a parallel hierarchy of domain objects. The traversal and processing may be easier (at least conceptually) than it would be with a DOM tree, but as a tradeoff you have to keep the two class hierarchies straight, which calls for careful naming conventions. There are no doubt use cases where that is exactly what is necessary, but the tool is not limited to only that approach. If you instead start by comparing JAXB with Hibernate – as a means of loading data from an external source into your domain objects – then it is natural to ask "why can't I use one set of domain objects for both?" At least some of the time, with a little caution, you can.

The Simple Case

In these examples I'll use the JAXB API directly. We only need to make a few simple calls to accomplish our task, so this is reasonably straightforward. It is worth noting that Spring does offer JAXB integration as well, and especially if you use Spring throughout your app, the configuration approach it offers may be preferable.

Suppose you have an EMPLOYEE table. Every employee has a unique numeric ID and a name. If you use annotations for your ORM mapping data, you might have a domain class like this:

@Entity
@Table(name = "EMPLOYEE")
public class Employee {
    @Id
    @Column(name = "EMPLOYEE_ID")
    private Integer employeeId;

    @Column(name = "FIRST_NAME")
    private String firstName;

    @Column(name = "LAST_NAME")
    private String lastName;

    // ... getters and setters ...
}

Now we want to let the user provide an Employee.xml data file. Supposing we don't have a specific XML schema with which we need to comply, we might as well see what JAXB's default handling of the class would be. So we'll start with the minimal steps to "marshal" an Employee instance into an XML document. If we're happy with how the resulting document looks, we'll swap in the unmarshalling code; if not, we can look into customizing the mapping.

First we need a JAXBContext instance configured to work with our domain class(es):

JAXBContext jaxb = JAXBContext.newInstance(Employee.class);

As an aside, instead of passing the class object(s) to newInstance(), we could pass in the name(s) of the package(s) containing the classes, provided each package contains either a jaxb.index file that lists the classes to use or an ObjectFactory class with methods for creating instances of the domain classes (and/or JAXBElements that wrap them). This approach might be preferable if you need XML mappings for a large number of unrelated domain classes.

The JAXBContext has methods for creating marshallers (which create XML documents to represent objects) and unmarshallers (which instantiate objects and initialize them from the data in XML documents). We can check out the default mapping for our Employee class like this:

Employee employee = new Employee();
employee.setEmployeeId(37);
employee.setFirstName("Dave");
employee.setLastName("Lister");

Marshaller marshaller = jaxb.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
marshaller.marshal(employee, System.out);

(The setProperty() call isn't strictly necessary but makes the output much more human-readable.)
If we try running this code, we'll get an exception telling us that we haven't identified a root element. To fix this we add the @XmlRootElement annotation to our Employee class:

@XmlRootElement
@Entity
@Table(name = "EMPLOYEE")
public class Employee {
    @Id
    @Column(name = "EMPLOYEE_ID")
    private Integer employeeId;

    @Column(name = "FIRST_NAME")
    private String firstName;

    @Column(name = "LAST_NAME")
    private String lastName;

    // ... getters and setters ...
}

By default, the marshaller will map every public bean property (getter/setter pair) and every public field; so if our Employee class has the getters and setters you'd expect, then our output should look something like this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<employee>
    <employeeId>37</employeeId>
    <firstName>Dave</firstName>
    <lastName>Lister</lastName>
</employee>

Note that the elements under <employee> will be in an arbitrary order. (In my tests it's been alphabetical.) In this case that works out nicely, but if it didn't we could force the order using the @XmlType annotation (see the sketch at the end of this article). The unmarshaller will, by default, take the elements in any order.

JAXB is happily ignorant of the JPA annotations, and Hibernate (or whatever JPA provider you might use) will disregard the JAXB annotations, so we can now load data from XML files into our database by simply asking JAXB to unmarshal the data from the files and passing the resulting objects to the JPA provider. The unmarshalling code would look like this:

JAXBContext jaxb = JAXBContext.newInstance(Employee.class);
Unmarshaller unmarshaller = jaxb.createUnmarshaller();
File xmlFile = /* ... */;
Employee employee = (Employee) unmarshaller.unmarshal(xmlFile);

By default, if an element that represents one of the bean properties is omitted from the XML, that property simply isn't set; so for example if our JPA mapping includes automatic generation of employeeId, then the <employee> element need only contain <firstName> and <lastName>.

The Good...

In theory, that's about it. (Extra credit if you know the difference between theory and practice.) A couple of annotations and maybe a dozen lines of code are enough to get you started. As an added benefit, you can see the relationships between all of your data's representations (XML, database, and Java object) in a single annotated .java file.

The Not So Good...

The above example is simple and may cover a fair number of basic use cases; but most real data models include things like one-to-many relationships and composite keys, which add wrinkles you may or may not foresee. In Part 2 (slated for August 25, 2014) I will address some of the complications I have encountered and discuss reasonably simple options for addressing each of them.

Reference: JAXB – A Newcomer's Perspective, Part 1 from our JCG partner Mark Adelsberger at the Keyhole Software blog.
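Since the article mentions @XmlType without showing it, here is a minimal hedged sketch of forcing the element order; the property names match the Employee class above, and the rest of the class is unchanged:

import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlType;

@XmlRootElement
@XmlType(propOrder = { "employeeId", "firstName", "lastName" })
public class Employee {
    // ... same fields, getters and setters as before ...
}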