
Keeping things DRY: Method overloading

A good clean application design requires discipline in keeping things DRY:

"Everything has to be done once. Having to do it twice is a coincidence. Having to do it three times is a pattern." — An unknown wise man

Now, if you're following the Extreme Programming rules, you know what needs to be done when you encounter a pattern: refactor mercilessly. Because we all know what happens when you don't.

Not DRY: Method overloading

One of the least DRY things you can do that is still acceptable is method overloading – in those languages that allow it (unlike Ceylon or JavaScript). Being an internal domain-specific language, the jOOQ API makes heavy use of overloading. Consider the type Field (modelling a database column):

    public interface Field<T> {

        // [...]

        Condition eq(T value);
        Condition eq(Field<T> field);
        Condition eq(Select<? extends Record1<T>> query);
        Condition eq(QuantifiedSelect<? extends Record1<T>> query);

        Condition in(Collection<?> values);
        Condition in(T... values);
        Condition in(Field<?>... values);
        Condition in(Select<? extends Record1<T>> query);

        // [...]
    }

So, in certain cases, non-DRY-ness is inevitable, to a certain extent also in the implementation of the above API. The key rule of thumb here, however, is to always have as few implementations as possible, also for overloaded methods. Try calling one method from another. For instance, these two methods are very similar:

    Condition eq(T value);
    Condition eq(Field<T> field);

The first method is a special case of the second one, where jOOQ users do not want to explicitly declare a bind variable. It is literally implemented as such:

    @Override
    public final Condition eq(T value) {
        return equal(value);
    }

    @Override
    public final Condition equal(T value) {
        return equal(Utils.field(value, this));
    }

    @Override
    public final Condition equal(Field<T> field) {
        return compare(EQUALS, nullSafe(field));
    }

    @Override
    public final Condition compare(Comparator comparator, Field<T> field) {
        switch (comparator) {
            case IS_DISTINCT_FROM:
            case IS_NOT_DISTINCT_FROM:
                return new IsDistinctFrom<T>(this, nullSafe(field), comparator);

            default:
                return new CompareCondition(this, nullSafe(field), comparator);
        }
    }

As you can see:

- eq() is just a synonym for the legacy equal() method
- equal(T) is a more specialised, convenience form of equal(Field<T>)
- equal(Field<T>) is a more specialised, convenience form of compare(Comparator, Field<T>)
- compare() finally provides access to the implementation of this API

All of these methods are also part of the public API and can be called by the API consumer directly, which is why the nullSafe() check is repeated in each method.

Why all the trouble? The answer is simple:

- There is only very little possibility of a copy-paste error throughout the API...
- ...because the same API has to be offered for ne, gt, ge, lt, le.
- No matter what part of the API happens to be integration-tested, the implementation itself is certainly covered by some test.
- It is extremely easy to provide users with a very rich API with lots of convenience methods, as users do not want to remember how the more general-purpose methods (like compare()) really work.

The last point is particularly important, and because of risks related to backwards-compatibility, not always followed by the JDK, for instance. In order to create a Java 8 Stream from an Iterable, you have to go through all this hassle:

    // Aagh, my fingers hurt...
       StreamSupport.stream(iterable.spliterator(), false);
    // ^^^^^^^^^^^^^                 ^^^^^^^^^^^    ^^^^^
    // |                             |              |
    // Not Stream!                   |              |
    //                               |              |
    // Hmm, Spliterator. Sounds like |              |
    // Iterator. But what is it? ----+              |
    //                                              |
    // What's this true and false?                  |
    // And do I need to care? ----------------------+

When, intuitively, you'd like to have:

    // Not Enterprise enough
    iterable.stream();

In other words, subtle Java 8 Streams implementation details will soon leak into a lot of client code, and many new utility functions will wrap these things again and again. See Brian Goetz's explanation on Stack Overflow for details.
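Such wrappers are easy to predict. Here is a minimal sketch of one – a hypothetical helper for illustration, not an actual JDK or jOOQ API – that follows the article's own rule of a thin convenience method delegating to the general-purpose one:

```java
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public final class Streams {

    private Streams() {
    }

    // The convenience method users intuitively look for; it hides the
    // Spliterator machinery and the "parallel" flag behind one call.
    public static <T> Stream<T> stream(Iterable<T> iterable) {
        return StreamSupport.stream(iterable.spliterator(), false);
    }
}
```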
On the flip side of delegating overload implementations, it is of course harder (i.e. more work) to implement such an API. This is particularly cumbersome if an API vendor also allows users to implement the API themselves (e.g. JDBC). Another issue is the length of stack traces generated by such implementations. But we've shown before on this blog that deep stack traces can be a sign of good quality. Now you know why.

Takeaway

The takeaway is simple. Whenever you encounter a pattern, refactor. Find the most common denominator, factor it out into an implementation, and see that this implementation is hardly ever used by delegating single responsibility steps from method to method. By following these rules, you will:

- have fewer bugs
- have a more convenient API

Happy refactoring!

Reference: Keeping things DRY: Method overloading from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

FX Playground

Introduction

FX Playground is a JavaFX-based prototyping tool or live editor that eliminates the step of compiling Java code. This concept isn't new; in the Web world, for instance, there are many HTML5 playgrounds that offer online editors enabling developers to quickly prototype or experiment with various JavaScript libraries. This allows the developer to focus on visualizations or UI details without needing to set up an IDE project or mess with files. Even older than playgrounds are REPLs (Read Eval Print Loop), where dynamic languages such as Groovy, Python, Ruby, etc. provide an interactive interpreter command line tool that lets developers quickly script code to be executed. Scala is a compiled language, but it also provides a REPL tool.

After finishing the book JavaFX 8 Introduction by Example, I noticed each example was created as a separate NetBeans project, which seemed a little overkill for small examples. Because the book is based on Java, each program needed to be compiled (via javac) prior to execution. Larger projects will typically need to be set up with a proper classpath and resources in the appropriate directory locations. Even larger projects will also need dependencies, which typically reside on Maven repositories.

JavaOne 2014

Based on timing, I was able to submit a talk regarding JavaFX-based playgrounds just in time. After a while, I was pleasantly surprised that my proposal (talk) was accepted. You can check out the session here. Also, I will be presenting with my good friend Gerrit Grunwald (@hansolo_), so be prepared to see awe-inspiring demos. Since the talk is a BoF (birds of a feather), the atmosphere will be low-key and very casual. I hope to see you there! The JavaOne talk is titled "JavaFX Coding Playground (JavaFX-Based Live Editor Tool) [BOF2730]". Based on the description, you'll find that the tool uses the new Nashorn (JavaScript) engine to interact with JavaFX primitives.

The figure below depicts the FX Playground tool's editor windows and a JavaFX display area. Starting clockwise at the lower left is the code editor window, allowing the user to use JavaScript (Nashorn) to interact with nodes. Next is the JavaFX FXML editor window, allowing the user to use FXML (upper left); the FXML window is optional. In the upper right, you will notice the JavaFX CSS editor window, allowing you to style nodes on the display surface. Lastly, at the bottom right is the output area, better known as the DISPLAY_SURFACE.

FX Playground in Action

Because FX Playground is still in development, I will give you a glimpse of some demos I've created on YouTube. The following are examples with links to videos:

- FXPlayground3d – Nashorn and JavaFX 3D
- FX Playground now has a settings slide-out panel – Nashorn, Rectangle w/CSS, and MediaView
- FX Playground using the Enzo library – Nashorn and the Enzo library
- FX Playground testing video w/ MediaView and WebView – Nashorn, MediaView and WebView

Roadmap

There are plans to opensource the code, but for now there is much needed functionality to add before public consumption. The following features are a work in progress:

- Make use of the FXML editor window
- Pop out the display panel into its own window
- Save, Save As, and Load playgrounds
- Build the software as an executable for tool users (90% done)
- Make the tool capable of using other languages via JSR 223 (a minimal sketch of the mechanism follows below)
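FX Playground's own code isn't public yet, so purely as a hedged illustration of that last roadmap item, this is what evaluating a script through the standard JSR 223 API looks like on JDK 8, which bundles the Nashorn engine:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ScriptingSketch {

    public static void main(String[] args) throws Exception {
        // JSR 223 looks up engines by name; "nashorn" ships with JDK 8
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        engine.eval("print('hello from Nashorn');");
    }
}
```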
I want to thank Oracle Corp., and especially the following engineers who helped me (some of them are not Oracle employees):

- David Grieve – @dsgrieve
- Jim Laskey – @wickund
- Sundararajan Athijegannathan – @sundararajan_a
- Danno Ferrin – @shemnon
- Sean Phillips – @SeanMiPhillips
- Mark Heckler – @MkHeck
- Jose Pereda – @JPeredaDnr
- Gerrit Grunwald – @hansolo_
- Jim Weaver – @JavaFXpert

Resources

- CarlFX's Channel – https://www.youtube.com/channel/UCNBYRHaYk9mlTmn9oAPp1VA
- 7 of the Best Code Playgrounds – http://www.sitepoint.com/7-code-playgrounds
- NetBeans – https://www.netbeans.org
- JavaFX 8 Introduction by Example – http://www.apress.com/9781430264606
- Nashorn – https://wiki.openjdk.java.net/display/Nashorn/Main
- Enzo – https://bitbucket.org/hansolo/enzo/wiki/Home
- Harmonic Code – http://harmoniccode.blogspot.com/

Reference: FX Playground from our JCG partner Carl Dea at the Carl's FX Blog blog.

How to Evaluate Job Offers

At some point in a career, many will be in a position to decide between multiple job offers from different companies – or at worst, having to decide between accepting a new job or staying put. When starting to compare offers, it is common for the recipient to focus on the known quantities (i.e. salary, bonus, etc.) and perhaps a couple of additional details that are generally considered more subjective (work environment, technologies). In order to make a truly wise choice, it is also useful to include less obvious factors as well as future considerations, as those generally have a much stronger influence on career earnings and success. These are harder to predict, but must enter into your decision unless your sole objective is to meet some immediate short-term need.

The easy part

The most common components factoring into gross compensation are:

- Cash compensation (salary, bonus, sign-on) – If the bonus is listed as guaranteed, the figure can be lumped into salary. Most bonuses are not guaranteed, but rather are tied to personal and/or company goals being met. Some firms or individual employees are willing to provide data on bonus history. Sign-ons are used to sweeten an offer or to rectify a potential cost the new hire would incur by leaving their job, such as an unpaid bonus.
- Healthcare premiums and contributions – Offer letters typically do not list employee out-of-pocket insurance cost, and personal circumstances may weigh heavily on how one values health insurance. Employer contribution can vary from 50-100%, while other companies offer employee-only contribution (no contribution towards spouse/partner/child), which can result in a total compensation difference of a few percent.
- 401k or retirement plans – Employer match and contribution to these plans can be significant. Consider both the dollar amounts and the vesting schedules.
- Education reimbursement – If considering a return to school, this policy could make a difference.
- Paid time off – Although the real value any employee places on time off will vary, the dollar value of each day of PTO can be estimated using a formula (a quick worked example follows this list): (annual salary / estimated annual work hours) x work hours in a day.
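For instance, with assumed figures of an $80,000 salary, 2,000 annual work hours (roughly 50 weeks at 40 hours), and 8-hour days, the arithmetic looks like this; the same calculation drives the per-hour offer comparison discussed in the next section:

```java
public class OfferMath {

    public static void main(String[] args) {
        // Dollar value of one PTO day:
        // (annual salary / estimated annual work hours) x work hours in a day
        double ptoDayValue = (80_000.0 / 2_000) * 8;  // $320 per day

        // Effective hourly rates, assuming ~50 working weeks per year:
        double offerA = 80_000.0 / (40 * 50);   // $40.00/hour at 40-hour weeks
        double offerB = 100_000.0 / (55 * 50);  // ~$36.36/hour at 55-hour weeks

        System.out.printf("PTO day: $%.2f, offer A: $%.2f/h, offer B: $%.2f/h%n",
                ptoDayValue, offerA, offerB);
    }
}
```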
Many candidates make the mistake of basing their decisions with too much weight placed on base salary. This may be attributed to our emotional attachment to numbers and compensation "milestones" (usually round numbers), the perception of status that results from salary, and the inability of candidates to accurately gather and calculate the details of a comprehensive package. A friend might tell you about her 100K salary, but how often do you hear someone independently offer up that they pay 10K per year for health insurance and only get one week of vacation?

Additional considerations

The details above are all easily obtained, quantified, and require no interpretation. Everything from this point on will require a bit of investigation as well as some educated guessing.

- Expected hours – To put a value on time for offer comparison, a quick calculation to convert salary into dollars per hour can be a telling figure. All else being equal, that 80K offer with a 40-hour work week pays more per hour than the 100K offer at 55 hours. Estimates of work hours may not be accurate, so multiple data sources can help.
- Commute time/cost and possibility for remote work – Distance may not be a reliable predictor of commute time or cost, and mundane details such as gas efficiency will quickly add up when you consider the trip is repeated 400+ times a year. Mass transit inefficiencies and delays have a cost to commuters as well. The ability to work remotely, even for one or two days a week, makes some difference.
- Travel – This can be viewed as a positive or a negative depending on the worker. Consider any hidden expenses that may not be reimbursed, such as child or pet care costs.
- Perks – Company-provided phone or internet, gym membership, and office meals/snacks are not things job seekers expect, yet could provide thousands of dollars in value.
- Self-improvement budget – Some companies may be willing to foot the bill for training or conferences that the employee would have paid for anyway.

Forecasting and speculation

The most vital characteristics contributing to a job's long-term value are often hidden and unsupported by reliable data. Establishing the present-day value of any one job is somewhat complex, and trying to forecast future values requires speculation.

- Future marketability – This is a key factor in career compensation, yet is often overlooked when the temptation of short-term gains is presented. The consideration of future marketability is most critical for new grads or junior-level employees, who are (unfortunately) often in debt and easily influenced by short-term gains and cash compensation. What skills will be obtained in a job, and to what extent will these new skills increase market value? Will having a company's name on a résumé (whether by associated prestige or number of direct competitors) create some additional demand for services? If a goal is to maximize lifetime earnings, one could theorize that a year of unpaid work at a place like Google or Facebook is preferable to two years of paid work at many other companies.
- Promotions and raises – Job offers only include starting salary/title. How, and how often, does a company evaluate employees for salary increases, and what amounts might be expected for performers? Do they tend to promote from within or hire from outside? Is there a career path, and is there a point where compensation plateaus?
- Stress and satisfaction – It's impossible to place a hard value on work stress or job satisfaction, and the amount of either is difficult to predict. Satisfaction, work/life balance, and stress can impact both health and productivity, which could also contribute to marketability.
- Stock/stock options – The number of factors that influence the potential value is too long to list. Vesting schedules may have substantial impact on perceived value if a long tenure isn't expected.
- Environment, team, management – Companies try to make a strong positive impression during interviews, but that image doesn't always accurately reflect day-to-day operations. Younger workers should place considerable weight on whether there are team members to learn from and mentors who are both available and willing to guide. Employees with long tenures will have insight, though the opinion of more recent hires may be more relevant to anyone considering an offer.

Conclusions

Job change decisions are complex, and tough choices usually end up coming from the gut. The immediate results of a choice are easily identified and quantified, but the more important long-term ramifications require research, interpretation, and a bit of conjecture. When combining all of the smaller elements of a compensation package, the highest salary will not always be the most lucrative offer.

Reference: How to Evaluate Job Offers from our JCG partner Dave Fecak at the Job Tips For Geeks blog.

Building extremely large in-memory InputStream for testing purposes

For some reason I needed an extremely large, possibly even infinite, InputStream that would simply return the same byte[] over and over. This way I could produce an insanely big stream of data by repeating a small sample. Similar functionality can be found in Guava: Iterable<T> Iterables.cycle(Iterable<T>) and Iterator<T> Iterators.cycle(Iterator<T>). For example, if you need an infinite source of 0 and 1, simply say Iterables.cycle(0, 1) and get 0, 1, 0, 1, 0, 1... infinitely. Unfortunately I haven't found such a utility for InputStream, so I jumped into writing my own. This article documents the many mistakes I made during that process, mostly due to overcomplicating and overengineering a straightforward solution.

We don't really need an infinite InputStream; being able to create a very large one (say, 32 GiB) is enough. So we are after the following method:

    public static InputStream repeat(byte[] sample, int times)

It basically takes a sample array of bytes and returns an InputStream returning these bytes. However, when sample runs out, it rolls over, returning the same bytes again – this process is repeated the given number of times, until the InputStream signals its end. One solution that I haven't really tried but which seems most obvious:

    public static InputStream repeat(byte[] sample, int times) {
        final byte[] allBytes = new byte[sample.length * times];
        for (int i = 0; i < times; i++) {
            System.arraycopy(sample, 0, allBytes, i * sample.length, sample.length);
        }
        return new ByteArrayInputStream(allBytes);
    }

I see you laughing there! If sample is 100 bytes and we need 32 GiB of input repeating these 100 bytes, the generated InputStream shouldn't really allocate 32 GiB of memory; we must be more clever here. As a matter of fact, repeat() above has another subtle bug. Arrays in Java are limited to 2^31 - 1 entries (int), and 32 GiB is way above that. The reason this program compiles is a silent integer overflow here: sample.length * times. This multiplication doesn't fit in an int.

OK, let's try something that at least theoretically can work. My first idea was as follows: what if I create many ByteArrayInputStreams sharing the same byte[] sample (they don't do an eager copy) and somehow join them together? Thus I needed some InputStream adapter that could take an arbitrary number of underlying InputStreams and chain them together – when the first stream is exhausted, switch to the next one. This awkward moment when you look for something in Apache Commons or Guava and apparently it was in the JDK forever... java.io.SequenceInputStream is almost ideal. However, it can only chain precisely two underlying InputStreams. Of course, since SequenceInputStream is an InputStream itself, we can use it recursively as an argument to an outer SequenceInputStream. Repeating this process, we can chain an arbitrary number of ByteArrayInputStreams together:

    public static InputStream repeat(byte[] sample, int times) {
        if (times <= 1) {
            return new ByteArrayInputStream(sample);
        } else {
            return new SequenceInputStream(
                    new ByteArrayInputStream(sample),
                    repeat(sample, times - 1)
            );
        }
    }

If times is 1, just wrap sample in a ByteArrayInputStream. Otherwise, use SequenceInputStream recursively. I think you can immediately spot what's wrong with this code: too deep recursion. The nesting level is the same as the times argument, which will reach millions or even billions. There must be a better way.
Luckily a minor improvement changes the recursion depth from O(n) to O(log n):

    public static InputStream repeat(byte[] sample, int times) {
        if (times <= 1) {
            return new ByteArrayInputStream(sample);
        } else {
            return new SequenceInputStream(
                    repeat(sample, times / 2),
                    repeat(sample, times - times / 2)
            );
        }
    }

Honestly, this was the first implementation I tried. It's a simple application of the divide and conquer principle, where we produce the result by evenly splitting it into two smaller sub-problems. Looks clever, but there is one issue: it's easy to prove we create t (t = times) ByteArrayInputStreams and O(t) SequenceInputStreams. While the sample byte array is shared, millions of various InputStream instances are wasting memory. This leads us to an alternative implementation, creating just one InputStream, regardless of the value of times:

    import com.google.common.collect.Iterators;
    import org.apache.commons.lang3.ArrayUtils;

    public static InputStream repeat(byte[] sample, int times) {
        final Byte[] objArray = ArrayUtils.toObject(sample);
        final Iterator<Byte> infinite = Iterators.cycle(objArray);
        final Iterator<Byte> limited = Iterators.limit(infinite, sample.length * times);
        return new InputStream() {
            @Override
            public int read() throws IOException {
                return limited.hasNext() ? limited.next() & 0xFF : -1;
            }
        };
    }

We will use Iterators.cycle() after all. But first we have to translate byte[] into Byte[], since iterators can only work with objects, not primitives. There is no idiomatic way to turn an array of primitives into an array of boxed types, so I use ArrayUtils.toObject(byte[]) from Apache Commons Lang. Having an array of objects, we can create an infinite iterator that cycles through the values of sample. Since we don't want an infinite stream, we cut off the infinite iterator using Iterators.limit(Iterator<T>, int), again from Guava. Now we just have to bridge from Iterator<Byte> to InputStream – after all, semantically they represent the same thing.

This solution suffers from two problems. First of all, it produces tons of garbage due to unboxing. Garbage collection is not that much concerned about dead, short-living objects, but it still seems wasteful. The second issue we already faced previously: the sample.length * times multiplication can cause integer overflow. It can't be fixed, because Iterators.limit() takes an int, not a long – for no good reason. By the way, we avoided a third problem by doing a bitwise AND with 0xFF – otherwise a byte with value -1 would signal end of stream, which is not the case. x & 0xFF is correctly translated to unsigned 255 (int).

So even though the implementation above is short and sweet, declarative rather than imperative, it's too slow and limited. If you have a C background, I can imagine how uncomfortable you were seeing me struggle. After all, the most straightforward, painfully simple and low-level implementation was the one I came up with last:

    public static InputStream repeat(byte[] sample, int times) {
        return new InputStream() {
            private long pos = 0;
            private final long total = (long) sample.length * times;

            public int read() throws IOException {
                return pos < total ? sample[(int) (pos++ % sample.length)] : -1;
            }
        };
    }

GC-free, pure JDK, fast and simple to understand. Let this be a lesson for you: start with the simplest solution that jumps to your mind; don't overengineer and don't be too smart. My previous solutions – declarative, functional, immutable, etc. – maybe they looked clever, but they were neither fast nor easy to understand.
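As a quick, hedged sanity check of that final version (the demo simply inlines the repeat() method from above so it compiles on its own), the stream below yields exactly 3,000,000 bytes from a 3-byte sample without any large allocation:

```java
import java.io.IOException;
import java.io.InputStream;

public class RepeatDemo {

    public static void main(String[] args) throws IOException {
        byte[] sample = {1, 2, 3};
        try (InputStream in = repeat(sample, 1_000_000)) {
            long count = 0;
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                count += read;
            }
            System.out.println(count); // prints 3000000
        }
    }

    // The article's final implementation, repeated here so the demo compiles
    public static InputStream repeat(byte[] sample, int times) {
        return new InputStream() {
            private long pos = 0;
            private final long total = (long) sample.length * times;

            @Override
            public int read() {
                return pos < total ? sample[(int) (pos++ % sample.length)] : -1;
            }
        };
    }
}
```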
The utility we just developed was not just a toy project; it will be used in a subsequent article.

Reference: Building extremely large in-memory InputStream for testing purposes from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.

Test Attribute #4 – Accuracy

This is the 4th post on test attributes that were described in the now even more famous "How to test your tests" post. If you want training and/or coaching on testing, contact me.

Accuracy is about pinpointing the location of failing code. If we know where the offending code is, we can easily analyze what problem we caused and move on to fixing it. The trivial example is tests that check different methods: if one of them fails, we know where to look.

Here's another simple case, on the same method. We have a PositiveCalculator class whose Add method adds two positive numbers, or throws an exception if they are not so positive:

    public int Add(int a, int b)
    {
        if ((a < 0) || (b < 0))
            throw new ArgumentException();

        return a + b;
    }

We can then write the following tests:

    [Test]
    public void AddTwoPositiveNumbers_GetResult()
    {
        PositiveCalculator calculator = new PositiveCalculator();
        Assert.That(calculator.Add(2, 2), Is.EqualTo(4));
    }

    [Test]
    public void AddTwoNegativeNumbers_Exception()
    {
        PositiveCalculator calculator = new PositiveCalculator();
        Assert.Throws<ArgumentException>(() => calculator.Add(-5, -5));
    }

Looking at the tests, we already see they check two different behaviors. When we combine what we read from the tests with the tested code, it's easy to relate the parts of the code to each test. So if one of them fails, we'll know where to look.

Unfortunately, code doesn't always look like this. It usually starts like that, but then grows to monster-size functions. When it does, it either becomes untestable, or incurs tests that are large, overlap each other, and test multiple things. None of those are accurate tests.

So what can we do? Let's start with the preventive measure: don't let the code grow. Be merciless about keeping methods small, and use the Single Responsibility Principle to extract code into smaller, easily testable, and accurate functions.

But I didn't write this code! How do I make my tests accurate? Here's what you can do. Now that you have a test, or a bunch of them, it's time to make use of them: start refactoring the code. Having the tests in place will tell you if you're breaking stuff, and it's very easy to go back to working mode, because refactoring is also done in small steps. Once you have broken the code into smaller pieces, you can write smaller tests, which give you the accuracy that the bigger tests didn't have. In fact, you might want to replace the big tests with some smaller ones, if they give better information and performance for the same coverage.

We can also make the tests more accurate with the following methods:

- One assert per test – When you check only one thing, chances are that your test is more accurate than when checking multiple things. If you have more asserts in your tests, break them into multiple tests (see the sketch after this list).
- Test shorter scenarios – In legacy code, it's tempting to test large scenarios, because the code does a lot and does not expose entry points to single operations. Try to test shorter scenarios rather than long ones, and smaller objects rather than large ones. Try to break long scenarios into short ones. If you use the big tests to refactor the code, you can then write smaller, more accurate tests.
- Mock unrelated stuff – If you have dependencies that do multiple things, and therefore make longer scenarios, mock them. You'll make the test more accurate because it now runs through only the relevant code you're interested in.
- Check the coverage – Visually, if possible. IDEs and tools that show visual coverage on the code are awesome, because they add another visual clue to where the impacted code is. On trivial code they don't matter much, but on complex code you can compare the paths of different tests, and by applying some elimination, you can find out where the problems are. You can also use the visual paths as feedback on how accurate your tests are, and if they aren't, make them more so.
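To illustrate the first point, here is a hedged JUnit 4 sketch; it assumes a Java PositiveCalculator with an add method analogous to the C# example above:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PositiveCalculatorAccuracyTest {

    // Two asserts in one test: if the first fails, the second never runs,
    // and the failure alone doesn't pinpoint which behavior broke.
    @Test
    public void addSeveralCases() {
        PositiveCalculator calculator = new PositiveCalculator();
        assertEquals(4, calculator.add(2, 2));
        assertEquals(7, calculator.add(3, 4));
    }

    // More accurate: one behavior per test, so the failing test name
    // alone points at the offending case.
    @Test
    public void addTwoAndTwo_Four() {
        assertEquals(4, new PositiveCalculator().add(2, 2));
    }

    @Test
    public void addThreeAndFour_Seven() {
        assertEquals(7, new PositiveCalculator().add(3, 4));
    }
}
```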
Accuracy helps us fix problems quickly. But it's definitely not so easy to come by, because it depends very much on the tested code. However, using a combination of the methods I suggested, and making use of working tests to refactor and simplify, test accuracy is definitely within reach.

Reference: Test Attribute #4 – Accuracy from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

JAXB – A Newcomer’s Perspective, Part 1

I know what a lot of you are already thinking, so let's get this out of the way: "JAXB? As in XML? Come on, all the cool kids are using JSON."

The "XML vs. JSON" debate and the many arguments that contribute to it are pretty well documented; I won't spend a lot of time rehashing them here. I believe that each format has its uses, but even if you're in the "no XML ever" camp you still might want to read on, as the observations and techniques I discuss should be equally applicable to JSON data binding with Jackson (or similar tools). In Part 1 I describe a simple usage pattern that pairs JAXB's data binding capabilities with JPA. Of course the interactions between the two aren't always so simple, so in Part 2 I'll look at how to address a few of the complications you can expect to encounter.

The Problem

On my current project, we're building a suite of Java applications to manage the staging of materials in a manufacturing process. We decided to build "from the outside in" to facilitate user-facing demos after any given iteration. So in the first iteration we built some of the screens with hard-coded dummy data; then with each successive iteration we added more infrastructure and logic behind the screens.

To make early demos more interactive, we decided to create a "test console" for the central app. A person typing commands at the console can simulate the behavior of the "not yet implemented" parts of the system. The cost to build the console is modest thanks to tools like Antlr 4 that make command parsing simple, and we see long-term value in using the console for testing and diagnostics.

We reached a point where the system's behavior needed to be driven by data from another app. The "other app" that's responsible for creating and maintaining this data hasn't been written and won't be for some time, so we needed a way to load sample data through the console.

Options

Essentially our task was to build (or leverage) a data loader. We settled on XML as a likely format for the file, and then rifled through the list of tools with which our team would generally be familiar. DBUnit has data-loading capabilities (intended for setting up repeatable test conditions). It supports two different XML schemas ("flat" and "full"), each of which is clearly table-oriented. It also provides for substitution variables, so we could build template files and allow the console input to set final values.

I harbor some reservations about using a unit testing tool in this way, but of the arrows in the team's quiver it could be the closest fit. For better or worse, my first attempt to apply it was not successful (turns out I was looking at the wrong part of the DBUnit API), which got me thinking a little further outside the box.

We already had a way – namely Hibernate – to push data into our database; so when I phrased the problem in terms of "how to create entity instances from XML documents," JAXB emerged as an obvious contender. I was pleased to discover that Java ships with a JAXB implementation, so I set to work trying it out.

A Newcomer's Perspective

Never having used JAXB, I started with a little research. Much of the material I found dealt with generating Java classes from an XML schema. This isn't surprising – it's a big part of what the tool can do – but in my case, I wanted to bind data to my existing Hibernate-mapped domain classes. And that leads to something that may be a bit more surprising: some of the most comprehensive tutorials I found didn't seem to anticipate this usage.
I think this is a good demonstration of the way that your starting assumptions about a tool can shape how you think about it and how you use it. If you start by comparing JAXB with DOM, as several online resources do, then it may be natural to think of the output of an unmarshalling operation as a document tree that needs to be traversed and processed, perhaps copying relevant data to a parallel hierarchy of domain objects. The traversal and processing may be easier (at least conceptually) than it would be with a DOM tree, but as a tradeoff you have to keep the two class hierarchies straight, which calls for careful naming conventions. There are no doubt use cases where that is exactly what is necessary, but the tool is not limited to only that approach. If you instead start by comparing JAXB with Hibernate – as a means of loading data from an external source into your domain objects – then it is natural to ask "why can't I use one set of domain objects for both?" At least some of the time, with a little caution, you can.

The Simple Case

In these examples I'll use the JAXB API directly. We only need to make a few simple calls to accomplish our task, so this is reasonably straightforward. It is worth noting that Spring offers JAXB integration as well, and especially if you use Spring throughout your app, the configuration approach it offers may be preferable.

Suppose you have an EMPLOYEE table. Every employee has a unique numeric ID and a name. If you use annotations for your ORM mapping data, you might have a domain class like this:

    @Entity
    @Table(name = "EMPLOYEE")
    public class Employee {

        @Id
        @Column(name = "EMPLOYEE_ID")
        private Integer employeeId;

        @Column(name = "FIRST_NAME")
        private String firstName;

        @Column(name = "LAST_NAME")
        private String lastName;

        // ... getters and setters ...
    }

Now we want to let the user provide an Employee.xml data file. Supposing we don't have a specific XML schema with which we need to comply, we might as well see what JAXB's default handling of the class would be. So we'll start with the minimal steps to "marshal" an Employee instance into an XML document. If we're happy with how the resulting document looks, we'll swap in the unmarshalling code; if not, we can look into customizing the mapping.

First we need a JAXBContext instance configured to work with our domain class(es):

    JAXBContext jaxb = JAXBContext.newInstance(Employee.class);

As an aside, instead of passing the class object(s) to newInstance(), we could pass in the name(s) of the package(s) containing the classes, provided each package contains either a jaxb.index file that lists the classes to use or an ObjectFactory class with methods for creating instances of the domain classes (and/or JAXBElements that wrap them). This approach might be preferable if you need XML mappings for a large number of unrelated domain classes.

The JAXBContext has methods for creating marshallers (which create XML documents to represent objects) and unmarshallers (which instantiate objects and initialize them from the data in XML documents). We can check out the default mapping for our Employee class like this:

    Employee employee = new Employee();
    employee.setEmployeeId(37);
    employee.setFirstName("Dave");
    employee.setLastName("Lister");

    Marshaller marshaller = jaxb.createMarshaller();
    marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
    marshaller.marshal(employee, System.out);

(The setProperty() call isn't strictly necessary but makes the output much more human-readable.)

If we try running this code, we'll get an exception telling us that we haven't identified a root element. To fix this we add the @XmlRootElement annotation to our Employee class:

    @XmlRootElement
    @Entity
    @Table(name = "EMPLOYEE")
    public class Employee {

        @Id
        @Column(name = "EMPLOYEE_ID")
        private Integer employeeId;

        @Column(name = "FIRST_NAME")
        private String firstName;

        @Column(name = "LAST_NAME")
        private String lastName;

        // ... getters and setters ...
    }

By default, the marshaller will map every public bean property (getter/setter pair) and every public field; so if our Employee class has the getters and setters you'd expect, then our output should look something like this:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <employee>
        <employeeId>37</employeeId>
        <firstName>Dave</firstName>
        <lastName>Lister</lastName>
    </employee>

Note that the elements under <employee> will be in an arbitrary order. (In my tests it's been alphabetical.) In this case that works out nicely, but if it didn't we could force the order using the @XmlType annotation, as sketched below. The unmarshaller will, by default, take the elements in any order.
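A hedged sketch of what that could look like – propOrder lists JavaBean property names, and the marshaller then emits the child elements in exactly that order (the JPA annotations are omitted here for brevity):

```java
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlType;

@XmlRootElement
@XmlType(propOrder = { "employeeId", "lastName", "firstName" })
public class Employee {

    private Integer employeeId;
    private String firstName;
    private String lastName;

    public Integer getEmployeeId() { return employeeId; }
    public void setEmployeeId(Integer employeeId) { this.employeeId = employeeId; }

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}
```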
JAXB is happily ignorant of the JPA annotations, and Hibernate (or whatever JPA provider you might use) will disregard the JAXB annotations, so we can now load data from XML files into our database by simply asking JAXB to unmarshal the data from the files and passing the resulting objects to the JPA provider. The unmarshalling code would look like this (note that unmarshal() returns Object, so a cast is needed):

    JAXBContext jaxb = JAXBContext.newInstance(Employee.class);
    Unmarshaller unmarshaller = jaxb.createUnmarshaller();
    File xmlFile = /* ... */;
    Employee employee = (Employee) unmarshaller.unmarshal(xmlFile);

By default, if an element that represents one of the bean properties is omitted from the XML, that property simply isn't set; so for example if our JPA mapping includes automatic generation of employeeId, then the <employee> element need only contain <firstName> and <lastName>.

The Good...

In theory, that's about it. (Extra credit if you know the difference between theory and practice.) A couple of annotations and maybe a dozen lines of code are enough to get you started. As an added benefit, you can see the relationships between all of your data's representations (XML, database, and Java object) in a single annotated .java file.

The Not So Good...

The above example is simple and may cover a fair number of basic use cases; but most real data models include things like one-to-many relationships and composite keys, which add wrinkles you may or may not foresee. In Part 2 (slated for August 25, 2014) I will address some of the complications I have encountered and discuss reasonably simple options for addressing each of them.

Reference: JAXB – A Newcomer's Perspective, Part 1 from our JCG partner Mark Adelsberger at the Keyhole Software blog.

JavaFX Tip 12: Define Icons in CSS

When you are a UI developer coming from Swing like me, then there is a good chance that you are still setting images / icons directly in your code. Most likely something like this:

    import javafx.scene.control.Label;
    import javafx.scene.image.ImageView;

    public class MyLabel extends Label {

        public MyLabel() {
            setGraphic(new ImageView(MyLabel.class
                    .getResource("image.gif").toExternalForm()));
        }
    }

In this example the image file is looked up via Class.getResource(), the URL is passed to the constructor of the ImageView node, and this node is set as the "graphic" property on the label. This approach works perfectly well, but with JavaFX there is a more elegant way. You can put the image definition into a CSS file, making it easy for you and / or others to replace it (the marketing department has decided to change the corporate identity once again). The same result as above can be achieved this way:

    import javafx.scene.control.Label;

    public class CSSLabel extends Label {

        public CSSLabel() {
            getStyleClass().add("folder-icon");
        }
    }

Now you obviously need a CSS file as well:

    .folder-icon {
        -fx-graphic: url("image.gif");
    }

And in your application you need to add the stylesheet to your scene graph. Here we are adding it to the scene:

    import javafx.application.Application;
    import javafx.geometry.Pos;
    import javafx.scene.Scene;
    import javafx.stage.Stage;

    public class MyApplication extends Application {

        public void start(Stage primaryStage) throws Exception {
            CSSLabel label = new CSSLabel();
            label.setText("Folder");
            label.setAlignment(Pos.CENTER);

            Scene scene = new Scene(label);
            scene.getStylesheets().add(MyApplication.class
                    .getResource("test.css").toExternalForm());

            primaryStage.setScene(scene);
            primaryStage.setTitle("Image Example");
            primaryStage.setWidth(250);
            primaryStage.setHeight(100);
            primaryStage.show();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }

With this approach you have a clean separation between your controls and their appearance, and you allow for easy customization as well.

Reference: JavaFX Tip 12: Define Icons in CSS from our JCG partner Dirk Lemmermann at the Pixel Perfect blog.

Integrate apps with Neo4j using Zapier

Recently, I was directed to Zapier to get some lightweight integration done between systems for a quick proof of concept. Initially skeptical, I found that it really can save time and tie together all those pieces of your system you never got around to integrating. Moreover, it is a way for people to integrate the applications they use without having to code, or pay a developer to do it for them. Going through the Zapbook, I found MongoDB, MySQL, PostgreSQL, SQL Server and – gasp! – no Neo4j. Sad.

I already had a potential use case, which was to collect data via a form and get it into Neo4j ASAP, i.e. with no coding. Google Forms is available on Zapier, so I went about making Neo4j available as well. I've now got a first-version zap ready for Neo4j which allows one to collect data triggered by another zap and save it to Neo4j via a Cypher statement. Here's what it looks like. Using the Google Forms example, I've set up a form to capture feedback about a product, and I want to push this data into Neo4j every time the form is submitted.

Step 1: Log into Zapier and click on Make a Zap!

Step 2: The triggering app is Google Docs, where we want to save data to Neo4j every time the form is filled, i.e. the spreadsheet backing the form has a new row inserted. The Neo4j zap currently supports only one action – Update the graph.

Step 3: Follow the instructions to make sure Zapier can access your Google Docs account.

Step 4: Set up a Neo4j account. Call it whatever you like; supply the username, password and URL. Note that in this version, the assumption is that your Neo4j database is not left open to the world. I used the Authentication extension to set mine up. Click on Continue and make sure Zapier confirms that it can indeed access your Neo4j database.

Step 5: Select your spreadsheet and the worksheet that contains the data.

Step 6: Write a Cypher query to convert that row into nodes and relationships. You must write a parameterized Cypher query in the Cypher Query field. The Cypher Parameters must contain a comma-separated list of the parameter names used in the query and the fields selected from the triggering app (use the Insert Fields button).

Step 7: See what the trigger and action samples look like – then test it out and celebrate when it says Success!

I checked what my database looked like at this point, and sure enough, the data was there.

That's all there is to it. Zapier will poll the triggering app every 15 minutes, so by the time all your forms are filled, you have a Neo4j database filled with data! I tried out the MongoDB-to-Neo4j and Trello-to-Neo4j integrations and they worked well. Whether you need a quick and dirty integration with Neo4j, or you want to collect data from other applications into Neo4j for later analysis, or you're building a serious application, Zapier could be of use. If you'd like to try it out, send @luannem a message and I'll send you a beta invite. And if you think this is useful, I'd be happy to hear about it and add more features to the Neo4j zap!

Reference: Integrate apps with Neo4j using Zapier from our JCG partner Luanne Misquitta at the Thought Bytes blog.

9 Differences between TCP and UDP Protocol – Java Network Interview Question

TCP and UDP are two transport layer protocols which are extensively used on the internet for transmitting data from one host to another. Good knowledge of how TCP and UDP work is essential for any programmer. That's why the difference between TCP and UDP is a popular Java programming interview question. I have seen this question many times in various Java interviews, especially for server-side Java developer positions. Since the FIX (Financial Information eXchange) protocol is also a TCP-based protocol, several investment banks, hedge funds, and exchange solution providers look for Java developers with good knowledge of TCP and UDP. Writing FIX engines and server-side components for high-speed electronic trading platforms requires capable developers with a solid understanding of the fundamentals, including data structures, algorithms and networking.

By the way, the use of TCP and UDP is not limited to one area; they are at the heart of the internet. HTTP, the protocol at the core of the internet, is based on TCP. One more reason why a Java developer should understand these two protocols in detail is that Java is extensively used to write multi-threaded, concurrent and scalable servers. Java also provides a rich Socket programming API for both TCP and UDP based communication (a minimal sketch follows below). In this article, we will learn the key differences between the TCP and UDP protocols. To start with, TCP stands for Transmission Control Protocol and UDP stands for User Datagram Protocol, and both are used extensively to build Internet applications.
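As a hedged illustration of those two APIs (it assumes some listener on localhost port 9000; error handling is omitted):

```java
import java.io.OutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TransportDemo {

    public static void main(String[] args) throws Exception {
        // TCP: the Socket constructor performs the three-way handshake,
        // so a connection exists before any user data is written.
        try (Socket tcp = new Socket("localhost", 9000)) {
            OutputStream out = tcp.getOutputStream();
            out.write("hello over TCP".getBytes(StandardCharsets.US_ASCII));
        }

        // UDP: no connection; each datagram is addressed and sent
        // independently, with no delivery or ordering guarantee.
        try (DatagramSocket udp = new DatagramSocket()) {
            byte[] payload = "hello over UDP".getBytes(StandardCharsets.US_ASCII);
            udp.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName("localhost"), 9000));
        }
    }
}
```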
Differences between TCP vs UDP Protocol

I love to compare two things on different points; this not only makes them easy to compare but also makes the differences easy to remember. When we compare TCP to UDP, we learn how both TCP and UDP work, which one provides reliable and guaranteed delivery and which doesn't, which protocol is faster and why, and, most importantly, when to choose TCP over UDP while building your own distributed application. In this article we will see the difference between UDP and TCP in 9 points: connection set-up, reliability, ordering, speed, overhead, header size, congestion control, application, different protocols based upon TCP and UDP, and how they transfer data.

Connection oriented vs connection less

The first and foremost difference between them is that TCP is a connection oriented protocol while UDP is a connection-less protocol. This means a connection is established between client and server before they can send data over TCP. Connection establishment is also known as the TCP handshake, where control messages are exchanged between client and server. The client, which is the initiator of the TCP connection, sends a SYN message to the server, which is listening on a TCP port. The server receives it and sends a SYN-ACK message, which is received by the client and responded to with an ACK. Once the server receives this ACK message, the TCP connection is established and ready for data transmission. On the other hand, UDP is a connection-less protocol, and a point-to-point connection is not established before sending messages. That's the reason why UDP is more suitable for multicast distribution of messages: one-to-many distribution of data in a single transmission.

Reliability

TCP provides a delivery guarantee, which means a message sent using the TCP protocol is guaranteed to be delivered to the client. If a message is lost in transit, it is recovered by resending, which is handled by the TCP protocol itself. On the other hand, UDP is unreliable; it doesn't provide any delivery guarantee. A datagram packet may be lost in transit. That's why UDP is not suitable for programs which require guaranteed delivery.

Ordering

Apart from the delivery guarantee, TCP also guarantees the order of messages. The messages will be delivered to the client in the same order that the server has sent them, though it's possible they may reach the other end of the network out of order. The TCP protocol will do all the sequencing and ordering for you. UDP doesn't provide any ordering or sequencing guarantee; datagram packets may arrive in any order. That's why TCP is suitable for applications which need delivery in a sequenced manner, though there are UDP-based protocols as well which provide ordering and reliability by using sequence numbers and redelivery, e.g. TIBCO Rendezvous, which is actually a UDP-based application.

Data boundary

TCP does not preserve data boundaries; UDP does. In the Transmission Control Protocol, data is sent as a byte stream, and no distinguishing indications are transmitted to signal message (segment) boundaries. With UDP, packets are sent individually and are checked for integrity only if they arrive. Packets have definite boundaries which are honoured upon receipt, meaning a read operation at the receiver socket will yield an entire message as it was originally sent. TCP will also deliver the complete message after assembling all bytes; messages are stored in TCP buffers before sending to make optimum use of network bandwidth.

Speed

In one word, TCP is slow and UDP is fast. Since TCP has to create a connection and ensure guaranteed, ordered delivery, it does a lot more than UDP. This costs TCP in terms of speed, which is why UDP is more suitable where speed is a concern, for example online video streaming, telecasts or online multiplayer games.

Heavy weight vs light weight

Because of the overhead mentioned above, the Transmission Control Protocol is considered heavyweight compared to the lightweight UDP protocol. The simple mantra of UDP is to deliver messages without bearing any overhead of creating a connection or guaranteeing delivery or ordering. This is also reflected in their header sizes, where the header carries each packet's metadata.

Header size

TCP has a bigger header than UDP. The usual header size of a TCP packet is 20 bytes, which is more than double the 8-byte header of a UDP datagram. The TCP header contains Sequence Number, Ack Number, Data Offset, Reserved, Control Bits, Window, Urgent Pointer, Options, Padding, Checksum, Source Port, and Destination Port, while the UDP header only contains Length, Source Port, Destination Port, and Checksum.

Congestion or flow control

TCP does flow control. TCP requires three packets to set up a socket connection before any user data can be sent, and it handles reliability and congestion control. UDP, on the other hand, does not have an option for flow control.

Usage and application

Where are TCP and UDP used on the internet? After learning the key differences between TCP and UDP, we can easily conclude which situations suit them. Since TCP provides delivery and sequencing guarantees, it is best suited for applications that require high reliability, where transmission time is relatively less critical. UDP is more suitable for applications that need fast, efficient transmission, such as games. UDP's stateless nature is also useful for servers that answer small queries from huge numbers of clients. In practice, TCP is used in the finance domain, e.g. the FIX protocol is a TCP-based protocol, while UDP is used heavily in gaming and entertainment sites.

TCP and UDP based protocols

The best examples of TCP-based higher-level protocols are HTTP and HTTPS, which are everywhere on the internet. In fact, most of the common protocols you are familiar with, e.g. Telnet, FTP and SMTP, are all based on the Transmission Control Protocol. UDP doesn't have anything as popular as HTTP, but it is extensively used in protocols like DHCP and DNS. Some of the other protocols which are based on the User Datagram Protocol are the Simple Network Management Protocol (SNMP), TFTP, BOOTP and NFS (early versions).

Always remember to mention that TCP is connection oriented, reliable, slow, provides guaranteed delivery and preserves the order of messages, while UDP is connection-less, unreliable, provides no ordering guarantee, but is a fast protocol. TCP's overhead is also much higher than UDP's, as it transmits more metadata per packet. It's worth mentioning that the header size of the Transmission Control Protocol is 20 bytes, compared to the 8-byte header of the User Datagram Protocol. Use TCP if you can't afford to lose any message, while UDP is better for high-speed data transmission where the loss of a single packet is acceptable, e.g. video streaming or online multiplayer games. While working on TCP/UDP-based applications on Linux, it's also good to remember the basic networking commands, e.g. telnet and netstat; they help tremendously in debugging or troubleshooting connection issues.

Reference: 9 Difference between TCP and UDP Protocol – Java Network Interview Question from our JCG partner Javin Paul at the Javarevisited blog.

Java Keystore Tutorial

Table of Contents

1. Introduction
2. SSL and how it works
3. Private Keys
4. Public Certificates
5. Root Certificates
6. Certificate Authorities
7. Certificate Chain
8. Keystore using Java keytool
9. Keystore Commands
10. Configure SSL using Keystores and Self Signed Certificates on Apache Tomcat

1. Introduction

Who of us hasn't visited eBay or Amazon to buy something, or a personal bank account to check it? Do you think those sites are secure enough for personal data like credit card or bank account numbers? Most of those sites use the Secure Sockets Layer (SSL) protocol to secure their Internet applications. SSL allows the data from a client, such as a Web browser, to be encrypted prior to transmission so that someone trying to sniff the data is unable to decipher it. Many Java application servers and Web servers support the use of keystores for SSL configuration. If you're building secure Java programs, learning to build a keystore is the first step.

2. SSL and how it works

An HTTP-based SSL connection is always initiated by the client using a URL starting with https:// instead of http://. At the beginning of an SSL session, an SSL handshake is performed. This handshake produces the cryptographic parameters of the session. In short, this is how it works:

- A browser requests a secure page (usually https://).
- The web server sends its public key with its certificate.
- The browser checks that the certificate was issued by a trusted party (usually a trusted root CA), that the certificate is still valid, and that the certificate is related to the site contacted.
- The browser then uses the public key to encrypt a random symmetric encryption key, and sends it to the server along with the encrypted URL required as well as other encrypted http data.
- The web server decrypts the symmetric encryption key using its private key, and uses the symmetric key to decrypt the URL and http data.
- The web server sends back the requested html document and http data encrypted with the symmetric key.
- The browser decrypts the http data and html document using the symmetric key and displays the information.

The world of SSL has, essentially, three types of certificates: private keys, public keys (also called public certificates or site certificates), and root certificates.

3. Private Keys

The private key contains the identity information of the server, along with a key value. The server should keep this key safe and protected by a password, because it's used to negotiate the hash during the handshake, and someone who obtains it can decrypt the traffic and get at your personal information. Losing it is like leaving your house key in the door lock.

4. Public Certificates

The public certificate (public key) is the portion that is presented to a client – much like your personal passport when you show up at the airport. The public certificate, tightly associated with the private key, is created from the private key using a Certificate Signing Request (CSR). After you create a private key, you create a CSR, which is sent to your Certificate Authority (CA). The CA returns a signed certificate, which has information about the server identity and about the CA.

5. Root Certificates

A root CA certificate is a CA certificate which is simply a self-signed certificate. This certificate represents an entity which issues certificates and is known as a Certificate Authority or CA, such as VeriSign, Thawte, etc.
6. Certificate Authorities

Certificate Authorities are companies who will sign certificates for you, such as VeriSign, Thawte, Comodo, or GeoTrust. Also, many companies and institutions act as their own CA, either by building a complete implementation from scratch, or by using an open source option such as OpenSSL.

7. Certificate Chain

When a server and client establish an SSL connection, a certificate is presented to the client; the client should determine whether to trust this certificate by validating the certificate chain. The client examines the issuer of the certificate, searches its list of trusted root certificates, and compares the issuer on the presented certificate to the subjects of the trusted certificates. If a match is found, the connection proceeds. If not, Web browsers may pop up a dialog box, warning you that the certificate cannot be trusted and offering the option to trust it anyway.

8. Keystore using Java keytool

Java Keytool is a key and certificate management utility. It allows users to manage their own public/private key pairs and certificates. Java Keytool stores the keys and certificates in what is called a keystore, and protects private keys with a password. Each certificate in a Java keystore is associated with a unique alias. When creating a Java keystore, you will first create the .jks file that initially contains only the private key, then generate a CSR. Then you will import the certificate into the keystore, including any root certificates.

9. Keystore Commands

Create keystores, keys and certificate requests:

- Generate a Java keystore and key pair:
  keytool -genkey -alias mydomain -keyalg RSA -keystore keystore.jks -storepass password
- Generate a certificate signing request (CSR) for an existing Java keystore:
  keytool -certreq -alias mydomain -keystore keystore.jks -storepass password -file mydomain.csr
- Generate a keystore and self-signed certificate:
  keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass password -validity 360

Import certificates:

- Import a root or intermediate CA certificate into an existing Java keystore:
  keytool -import -trustcacerts -alias root -file Thawte.crt -keystore keystore.jks -storepass password
- Import a signed primary certificate into an existing Java keystore:
  keytool -import -trustcacerts -alias mydomain -file mydomain.crt -keystore keystore.jks -storepass password

Export certificates:

- Export a certificate from a keystore:
  keytool -export -alias mydomain -file mydomain.crt -keystore keystore.jks -storepass password

Check/list/view certificates:

- Check a stand-alone certificate:
  keytool -printcert -v -file mydomain.crt
- Check which certificates are in a Java keystore:
  keytool -list -v -keystore keystore.jks -storepass password
- Check a particular keystore entry using an alias:
  keytool -list -v -keystore keystore.jks -storepass password -alias mydomain

Delete certificates:

- Delete a certificate from a Java Keytool keystore:
  keytool -delete -alias mydomain -keystore keystore.jks -storepass password

Change passwords:

- Change a Java keystore password:
  keytool -storepasswd -new new_storepass -keystore keystore.jks -storepass password
- Change a private key password:
  keytool -keypasswd -alias client -keypass old_password -new new_password -keystore client.jks -storepass password
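Keystores created with keytool can also be read programmatically. A hedged sketch, with the file name, alias and password taken from the examples above:

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;

public class KeystoreInspector {

    public static void main(String[] args) throws Exception {
        char[] storePass = "password".toCharArray(); // the -storepass used above

        // "JKS" is the keystore type keytool creates by default
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("keystore.jks")) {
            keyStore.load(in, storePass);
        }

        // Look up the certificate stored under the alias from the examples
        Certificate certificate = keyStore.getCertificate("mydomain");
        System.out.println(certificate);
    }
}
```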
10. Configure SSL using Keystores and Self Signed Certificates on Apache Tomcat

Generate a new keystore and self-signed certificate. Using this command, you will be prompted to enter specific information such as user name, organizational unit, company and location:

    keytool -genkey -alias tomcat -keyalg RSA -keystore /home/ashraf/Desktop/JavaCodeGeek/keystore.jks -validity 360

You can list the details of the certificate you just created using this command:

    keytool -list -keystore /home/ashraf/Desktop/JavaCodeGeek/keystore.jks

Download Tomcat 7.

Configure Tomcat's server to support SSL (https) connections by adding a connector element in Tomcat\conf\server.xml:

    <Connector port="8443" maxThreads="150"
               scheme="https" secure="true" SSLEnabled="true"
               keystoreFile="/home/ashraf/Desktop/JavaCodeGeek/.keystore"
               keystorePass="password" clientAuth="false"
               keyAlias="tomcat" sslProtocol="TLS" />

Start Tomcat and go to https://localhost:8443/. You will run into a security issue where the browser presents untrusted-certificate error messages. In the case of e-commerce, such error messages result in an immediate lack of confidence in the website, and organizations risk losing confidence and business from the majority of consumers. That's normal, as your certificate isn't yet signed by a CA such as Thawte or VeriSign, who would verify the identity of the requester and issue a signed certificate.

You can click "Proceed anyway" until you receive your signed certificate.