

JavaFX Tip 13: Study Modena CSS File

This is the easiest and shortest tip so far. If you want to do any of the following things:
- learn how to use CSS
- make your custom controls look like the standard controls
- reuse an SVG path graphic used by a standard control (e.g. scrollbar arrows)
- figure out how to navigate the structure of the standard controls
- determine the color used for a specific item
- consistently modify several standard controls
then simply take a look at the default CSS stylesheet that ships with JavaFX. The file is called modena.css and can be found in the jfxrt.jar file in this location: com/sun/javafx/scene/control/skin/modena/.
Reference: JavaFX Tip 13: Study Modena CSS File from our JCG partner Dirk Lemmermann at the Pixel Perfect blog....
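As a small supplement to the tip above, here is a minimal sketch (an assumption of this example: you run it on a Java 8 runtime where jfxrt.jar is on the classpath, e.g. via jre/lib/ext) that prints the beginning of modena.css so you can study it without unpacking the jar:

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;

public class PrintModena {
    public static void main(String[] args) throws Exception {
        // Resource path inside jfxrt.jar, as given in the tip above
        String path = "com/sun/javafx/scene/control/skin/modena/modena.css";
        // in will be null if jfxrt.jar is not on the classpath
        try (InputStream in = PrintModena.class.getClassLoader().getResourceAsStream(path);
             BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"))) {
            // Print the first 40 lines; remove limit() to dump the whole stylesheet
            reader.lines().limit(40).forEach(System.out::println);
        }
    }
}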

JUnit in a Nutshell: Test Structure

Despite the existence of books and articles about JUnit testing, I still quite often meet programmers who at most have a vague understanding of the tool and its proper usage. Hence I had the idea to write a multi-part tutorial that explains the essentials from my point of view. Maybe the hands-on approach taken in this mini-series might be appropriate to get one or two additional developers interested in unit testing – which would make the effort worthwhile. Last time I introduced the very basics of a test – how it is written, executed and evaluated. While doing so I outlined that a test is more than a simple verification machine and can also serve as a kind of low-level specification. Therefore it should be developed with the highest possible coding standards one could think of. This post will continue with the tutorial’s example and work out the common structure that characterizes well-written unit tests, using the nomenclature defined by Meszaros in xUnit Test Patterns [MES]. The Four Phases of a Test “A tidy house, a tidy mind” – Old Adage. The tutorial’s example is about writing a simple number range counter, which delivers a certain amount of consecutive integers, starting from a given value. Beginning with the happy path, the last post’s outcome was a test which verified that the NumberRangeCounter returns consecutive numbers on subsequent invocations of the method next: @Test public void subsequentNumber() { NumberRangeCounter counter = new NumberRangeCounter(); int first = counter.next(); int second = counter.next(); assertEquals( first + 1, second ); } Note that I stick with the JUnit built-in functionality for verification in this chapter. I will cover the pros and cons of particular matcher libraries (Hamcrest, AssertJ) in a separate post. The attentive reader may have noticed that I use empty lines to separate the test into distinct segments and probably wonders why. To answer this question let us look at each of the three sections more closely: The first one creates an instance of the object to be tested, referred to as SUT (System Under Test). In general this section establishes the SUT’s state prior to any test-related activities. As this state constitutes a well-defined test input, it is also denoted as the fixture of a test. After the fixture has been established it is about time to invoke those methods of the SUT which represent a certain behavior the test intends to verify. Often this is just a single method and the outcome is stored in local variables. The last section of the test is responsible for verifying whether the expected outcome of a given behavior has been obtained. Although there is a school of thought propagating a one-assert-per-test policy, I prefer the single-concept-per-test idea, which means that this section is not limited to just one assertion, as it happens to be in the example [MAR1]. This test structure is very common and has been described by various authors. It has been labeled the arrange, act, assert [KAC] – or build, operate, check [MAR2] – pattern. But for this tutorial I like to be precise and stick with Meszaros’ [MES] four phases called setup (1), exercise (2), verify (3) and teardown (4). The teardown phase is about cleaning up the fixture in case it is persistent. Persistent means the fixture or part of it would survive the end of a test and may have a bad influence on the results of its successor. Plain unit tests seldom use persistent fixtures, so the teardown phase is – as in our example – often omitted; the short sketch below labels the four phases in our example.
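Purely as an illustration (not part of the original post), here is the example test again with the four phases marked in comments; a static import of org.junit.Assert.assertEquals is assumed:

@Test
public void subsequentNumber() {
  // (1) setup: create the SUT and establish the fixture
  NumberRangeCounter counter = new NumberRangeCounter();

  // (2) exercise: invoke the behavior under test
  int first = counter.next();
  int second = counter.next();

  // (3) verify: check the expected outcome
  assertEquals( first + 1, second );

  // (4) teardown: omitted here, as there is no persistent fixture to clean up
}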
And as it is completely irrelevant from the specification angle, we like to keep it out of the test method anyway. How this can be achieved is covered in a minute. Due to the scope of this post I avoid a precise definition of a unit test. But I hold on to the three types of developers’ tests Tomek Kaczanowski describes in Practical Unit Testing with JUnit and Mockito, which can be summarized as follows: Unit tests make sure that your code works and have to run often and therefore incredibly quickly – which is basically what this tutorial is all about. Integration tests focus on the proper integration of different modules, including code over which developers have no control. This usually requires some resources (e.g. database, filesystem) and because of this the tests run more slowly. End-to-End tests verify that your code works from the client’s point of view and put the system as a whole to the test, mimicking the way the user would use it. They usually require a significant amount of time to execute. For an in-depth example of how to combine these testing types effectively you might have a look at Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. But before we go ahead with the example there is one question left to be discussed: Why is this Important? “The ratio of time spent reading (code) versus writing is well over 10 to 1…” – Robert C. Martin, Clean Code. The purpose of the four phases pattern is to make it easy to understand what behavior a test is verifying. Setup always defines the test’s precondition, exercise actually invokes the behavior under test, verify specifies the expected outcome and teardown is all about housekeeping, as Meszaros puts it. This clean phase separation signals the intention of a single test clearly and increases readability. The approach implies that a test verifies only one behavior for a given input state at a time and therefore usually does without conditional blocks or the like (Single-Condition Test). While it is tempting to avoid tedious fixture setup and test as much functionality as possible within a single method, this usually leads to some kind of obfuscation by nature. So always remember: A test, if not written with care, can be a pain in the ass regarding maintenance and progression. But now it is time to proceed with the example and see what this new knowledge can do for us! Corner Case Tests Once we are done with the happy path test(s) we continue by specifying the corner case behavior. The description of the number range counter states that the sequence of numbers should start from a given value, which is important as it defines the lower bound (one corner…) of a counter’s range. It seems reasonable that this value is passed as a configuration parameter to the NumberRangeCounter‘s constructor. An appropriate test could verify that the first number returned by next is equal to this initialization value: @Test public void lowerBound() { NumberRangeCounter counter = new NumberRangeCounter( 1000 ); int actual = counter.next(); assertEquals( 1000, actual ); } Once again our test class does not compile. Fixing this by introducing a lowerBound parameter to the counter’s constructor leads to a compile error in the subsequentNumber test. Luckily the latter test has been written to be independent of the lower bound definition, so the parameter can be used by the fixture of this test, too. However the literal number in the test is redundant and does not indicate its purpose clearly.
The latter is usually denoted as a magic number. To improve the situation we could introduce a constant LOWER_BOUND and replace all literal values. Here is what the test class would look like afterwards: public class NumberRangeCounterTest { private static final int LOWER_BOUND = 1000; @Test public void subsequentNumber() { NumberRangeCounter counter = new NumberRangeCounter( LOWER_BOUND ); int first = counter.next(); int second = counter.next(); assertEquals( first + 1, second ); } @Test public void lowerBound() { NumberRangeCounter counter = new NumberRangeCounter( LOWER_BOUND ); int actual = counter.next(); assertEquals( LOWER_BOUND, actual ); } } Looking at the code one may notice that the fixture’s in-line setup is the same for both tests. Usually an in-line setup is composed of more than a single statement, but there are often commonalities between the tests. To avoid redundancy the things in common can be delegated to a setup method: public class NumberRangeCounterTest { private static final int LOWER_BOUND = 1000; @Test public void subsequentNumber() { NumberRangeCounter counter = setUp(); int first = counter.next(); int second = counter.next(); assertEquals( first + 1, second ); } @Test public void lowerBound() { NumberRangeCounter counter = setUp(); int actual = counter.next(); assertEquals( LOWER_BOUND, actual ); } private NumberRangeCounter setUp() { return new NumberRangeCounter( LOWER_BOUND ); } } While it is debatable if the delegate setup approach improves readability for the given case, it leads to an interesting feature of JUnit: the possibility to execute a common test setup implicitly. This can be achieved with the annotation @Before applied to a public, non-static method that has no return value and no parameters. But this feature comes at a price. If we want to eliminate the redundant setUp calls within the tests, we have to introduce a field that holds the instance of our NumberRangeCounter: public class NumberRangeCounterTest { private static final int LOWER_BOUND = 1000; private NumberRangeCounter counter; @Before public void setUp() { counter = new NumberRangeCounter( LOWER_BOUND ); } @Test public void subsequentNumber() { int first = counter.next(); int second = counter.next(); assertEquals( first + 1, second ); } @Test public void lowerBound() { int actual = counter.next(); assertEquals( LOWER_BOUND, actual ); } } It is easy to see that implicit setup can remove a lot of code duplication. But it also introduces a kind of magic from the viewpoint of a test, which can make it difficult to read. So the clear answer to the question ‘Which kind of setup type should I use?’ is: it depends… As I usually pay attention to keeping units/tests small, the trade-off seems acceptable. So I often use the implicit setup to define the common/happy path input and supplement it accordingly with a small in-line/delegate setup for each of the corner case tests. Otherwise, as beginners in particular tend to let tests grow too large, it might be better to stick with in-line and delegate setup first. The JUnit runtime ensures that each test gets invoked on a new instance of the test’s class. This means the constructor-only fixture in our example could omit the setUp method completely. Assignment of the counter field with a fresh fixture could be done implicitly: private NumberRangeCounter counter = new NumberRangeCounter( LOWER_BOUND ); While some people use this a lot, other people argue that a @Before annotated method makes the intention more explicit.
Well, I would not go to war over this and will leave the decision to your personal taste… Implicit Teardown Imagine for a moment that NumberRangeCounter needs to be disposed of for whatever reason, which means we have to append a teardown phase to our tests. Based on our latest snippet this would be easy with JUnit, as it supports implicit teardown using the @After annotation. We would only have to add the following method: @After public void tearDown() { counter.dispose(); } As mentioned above, teardown is all about housekeeping and adds no information at all to a particular test. Because of this it is very often convenient to perform it implicitly. Alternatively one would have to handle this with a try-finally construct to ensure that teardown is executed even if a test fails. But the latter usually does not improve readability. Expected Exceptions A particular corner case is testing expected exceptions. Consider for the sake of the example that NumberRangeCounter should throw an IllegalStateException if a call of next exceeds the amount of values for a given range. Again it might be reasonable to configure the range via a constructor parameter. Using a try-catch construct we could write: @Test public void exeedsRange() { NumberRangeCounter counter = new NumberRangeCounter( LOWER_BOUND, 0 ); try { counter.next(); fail(); } catch( IllegalStateException expected ) { } } Well, this looks somewhat ugly as it blurs the separation of the test phases and is not very readable. But since Assert.fail() throws an AssertionError it ensures that the test fails if no exception is thrown. And the catch block ensures that the test completes successfully in case the expected exception is thrown. With Java 8 it is possible to write cleanly structured exception tests using lambda expressions. For more information please refer to Clean JUnit Throwable-Tests with Java 8 Lambdas. If it is enough to verify that a certain type of exception has been thrown, JUnit offers implicit verification via the expected attribute of the @Test annotation. The test above could then be written as: @Test( expected = IllegalStateException.class ) public void exeedsRange() { new NumberRangeCounter( LOWER_BOUND, ZERO_RANGE ).next(); } While this approach is very compact, it can also be dangerous, because it does not distinguish whether the given exception was thrown during the setup or the exercise phase of a test. So the test would be green – and hence worthless – if an IllegalStateException were accidentally thrown by the constructor. JUnit offers a third possibility for testing expected exceptions more cleanly, the ExpectedException rule. As we have not covered Rules yet and the approach twists the four-phase structure a bit, I postpone the explicit discussion of this topic to a follow-up post about rules and runners and provide only a snippet as a teaser: public class NumberRangeCounterTest { private static final int LOWER_BOUND = 1000; @Rule public ExpectedException thrown = ExpectedException.none(); @Test public void exeedsRange() { thrown.expect( IllegalStateException.class ); new NumberRangeCounter( LOWER_BOUND, 0 ).next(); } [...] } However, if you do not want to wait, you might have a look at Rafał Borowiec‘s thorough explanations in his post JUNIT EXPECTEDEXCEPTION RULE: BEYOND BASICS. Conclusion This chapter of JUnit in a Nutshell explained the four-phase structure commonly used to write unit tests – setup, exercise, verify and teardown.
It described the purpose of each phase and emphasized how it improves the readability of test cases when used consistently. The example deepened this learning material in the context of corner case tests. It was hopefully well-balanced enough to provide a comprehensible introduction without being trivial. Suggestions for improvements are of course highly appreciated. The next chapter of the tutorial will continue the example and cover how to deal with unit dependencies and test isolation, so stay tuned. References: [MES] xUnit Test Patterns, Chapter 19, Four-Phase Test, Gerard Meszaros, 2007. [MAR1] Clean Code, Chapter 9: Unit Tests, page 130 et seqq., Robert C. Martin, 2009. [KAC] Practical Unit Testing with JUnit and Mockito, 3.9. Phases of a Unit Test, Tomek Kaczanowski, 2013. [MAR2] Clean Code, Chapter 9: Unit Tests, page 127, Robert C. Martin, 2009. Reference: JUnit in a Nutshell: Test Structure from our JCG partner Rudiger Herrmann at the Code Affine blog....

All You Ever Need to Know About Recursive SQL

Oracle SYNONYMs are a great feature. You can implement all sorts of backwards-compatibility tweaks simply by creating SYNONYMs in your database. Consider the following schema: CREATE TABLE my_table (col NUMBER(7)); CREATE SYNONYM my_table_old FOR my_table; CREATE SYNONYM my_table_bak FOR my_table_old; Now you can query your same old table through three different names – it’ll all result in the same output: SELECT * FROM my_table; -- Same thing: SELECT * FROM my_table_old; SELECT * FROM my_table_bak; The trouble is, when you see my_table_bak in code (or some even more obfuscated name), do you immediately know what it really is? Use this query to find out. We can use the ALL_SYNONYMS table to figure this one out. This query will already give a simple overview: SELECT * FROM ALL_SYNONYMS WHERE TABLE_OWNER = 'PLAYGROUND' The output is: OWNER SYNONYM_NAME TABLE_OWNER TABLE_NAME --------------------------------------------------- PLAYGROUND MY_TABLE_BAK PLAYGROUND MY_TABLE_OLD PLAYGROUND MY_TABLE_OLD PLAYGROUND MY_TABLE But as you can see, this is boring, because we have transitive synonyms in there and I don’t want to go through the complete table to figure out that MY_TABLE_BAK -> MY_TABLE_OLD -> MY_TABLE. So let’s use CONNECT BY! Oracle (as well as Informix and CUBRID) has this awesome CONNECT BY clause for hierarchical SQL. There is also the possibility to express hierarchical SQL using the more powerful common table expressions, if you dare. But let’s see how we can transitively resolve our tables. Here’s how: SELECT s.OWNER, s.SYNONYM_NAME, -- Get to the root of the hierarchy CONNECT_BY_ROOT s.TABLE_OWNER TABLE_OWNER, CONNECT_BY_ROOT s.TABLE_NAME TABLE_NAME FROM ALL_SYNONYMS s WHERE s.TABLE_OWNER = 'PLAYGROUND' -- The magic CONNECT BY clause! CONNECT BY s.TABLE_OWNER = PRIOR s.OWNER AND s.TABLE_NAME = PRIOR s.SYNONYM_NAME First off, there is CONNECT BY, which allows you to “connect” hierarchies by their hierarchical predecessors. On each level of the hierarchy, we’ll connect the TABLE_NAME with its previous (“PRIOR”) SYNONYM_NAME. This will recurse as long as the chain doesn’t end (or until it runs into a cycle). What’s also interesting is the CONNECT_BY_ROOT keyword, which, for each path through the hierarchy, displays the root of the path. In our case, that’s the target TABLE_NAME. The output can be seen here: OWNER SYNONYM_NAME TABLE_OWNER TABLE_NAME --------------------------------------------------- PLAYGROUND MY_TABLE_OLD PLAYGROUND MY_TABLE PLAYGROUND MY_TABLE_BAK PLAYGROUND MY_TABLE PLAYGROUND MY_TABLE_BAK PLAYGROUND MY_TABLE_OLD <-- Useless If you’re confused by the records that are displayed, just add the LEVEL pseudo-column to display the recursion level: SELECT -- Add level here LEVEL, s.OWNER, s.SYNONYM_NAME, CONNECT_BY_ROOT s.TABLE_OWNER TABLE_OWNER, CONNECT_BY_ROOT s.TABLE_NAME TABLE_NAME FROM ALL_SYNONYMS s WHERE s.TABLE_OWNER = 'PLAYGROUND' CONNECT BY s.TABLE_OWNER = PRIOR s.OWNER AND s.TABLE_NAME = PRIOR s.SYNONYM_NAME LEVEL OWNER SYNONYM_NAME TABLE_OWNER TABLE_NAME ---------------------------------------------------------- 1 PLAYGROUND MY_TABLE_OLD PLAYGROUND MY_TABLE 2 PLAYGROUND MY_TABLE_BAK PLAYGROUND MY_TABLE 1 PLAYGROUND MY_TABLE_BAK PLAYGROUND MY_TABLE_OLD ^^^^^^ Awesome! Getting rid of “bad records” using START WITH As you can see, some of the results are now synonyms pointing directly to the target table, whereas the last record still points to an intermediate element from the synonym path.
This is because we’re recursing into the path hierarchies from every record in the table, also from the “intermediate” synonym references, whose TABLE_NAME is yet another synonym. Let’s get rid of those as well, using the optional START WITH clause, which allows us to limit tree traversals to those trees whose roots fulfil a given predicate: SELECT s.OWNER, s.SYNONYM_NAME, CONNECT_BY_ROOT s.TABLE_OWNER TABLE_OWNER, CONNECT_BY_ROOT s.TABLE_NAME TABLE_NAME FROM ALL_SYNONYMS s WHERE s.TABLE_OWNER = 'PLAYGROUND' CONNECT BY s.TABLE_OWNER = PRIOR s.OWNER AND s.TABLE_NAME = PRIOR s.SYNONYM_NAME -- Start recursing only from non-synonym objects START WITH EXISTS ( SELECT 1 FROM ALL_OBJECTS WHERE s.TABLE_OWNER = ALL_OBJECTS.OWNER AND s.TABLE_NAME = ALL_OBJECTS.OBJECT_NAME AND ALL_OBJECTS.OWNER = 'PLAYGROUND' AND ALL_OBJECTS.OBJECT_TYPE <> 'SYNONYM' ) So, essentially, we’re requiring the TABLE_NAME to be any object from ALL_OBJECTS that is in our schema, but not a SYNONYM. (yes, synonyms work for all objects, including procedures, packages, types, etc.) Running the above query gets us the desired result: OWNER SYNONYM_NAME TABLE_OWNER TABLE_NAME --------------------------------------------------- PLAYGROUND MY_TABLE_OLD PLAYGROUND MY_TABLE PLAYGROUND MY_TABLE_BAK PLAYGROUND MY_TABLE What about PUBLIC synonyms? Most often, you will not use local synonyms, though, but PUBLIC ones. Oracle has this quirky PUBLIC pseudo-schema, in which you cannot create objects, but in which you can create synonyms. So, let’s create some more synonyms for backwards-compatibility purposes: CREATE PUBLIC SYNONYM my_table_bak2 FOR my_table_bak; CREATE SYNONYM bak_backup_old FOR my_table_bak2; Unfortunately, this will break our chain, because, for some reason that only Oracle and the Oracle of Delphi know, PUBLIC is reported as the OWNER of the synonym, but not as the TABLE_OWNER. Let’s see some raw data with: SELECT * FROM ALL_SYNONYMS WHERE TABLE_OWNER = 'PLAYGROUND' … and thus: OWNER SYNONYM_NAME TABLE_OWNER TABLE_NAME ------------------------------------------------------ PLAYGROUND MY_TABLE_OLD PLAYGROUND MY_TABLE PLAYGROUND MY_TABLE_BAK PLAYGROUND MY_TABLE_OLD PUBLIC MY_TABLE_BAK2 PLAYGROUND MY_TABLE_BAK PLAYGROUND BAK_BACKUP_OLD PLAYGROUND MY_TABLE_BAK2 <-- Not PUBLIC As you can see, the PUBLIC SYNONYM MY_TABLE_BAK2 is reported to be in the PLAYGROUND schema! This breaks recursion, of course. We’re missing a record: OWNER SYNONYM_NAME TABLE_OWNER TABLE_NAME ------------------------------------------------------ PLAYGROUND MY_TABLE_OLD PLAYGROUND MY_TABLE PLAYGROUND MY_TABLE_BAK PLAYGROUND MY_TABLE PUBLIC MY_TABLE_BAK2 PLAYGROUND MY_TABLE <-- Hmm? In order to work around this issue, we’ll have to tweak our original data set. Any object reported as (TABLE_OWNER, TABLE_NAME) might in fact be a synonym called ('PUBLIC', TABLE_NAME).
The trick is thus to simply duplicate all input data as such: SELECT s.OWNER, s.SYNONYM_NAME, CONNECT_BY_ROOT s.TABLE_OWNER TABLE_OWNER, CONNECT_BY_ROOT s.TABLE_NAME TABLE_NAME-- Tweaked data set FROM ( SELECT OWNER, SYNONYM_NAME, TABLE_OWNER, TABLE_NAME FROM ALL_SYNONYMS UNION ALL SELECT OWNER, SYNONYM_NAME, 'PUBLIC', TABLE_NAME FROM ALL_SYNONYMS ) s-- Add the synthetic PUBLIC TABLE_OWNER as well WHERE s.TABLE_OWNER IN ( 'PLAYGROUND', 'PUBLIC' ) CONNECT BY s.TABLE_OWNER = PRIOR s.OWNER AND s.TABLE_NAME = PRIOR s.SYNONYM_NAME START WITH EXISTS ( SELECT 1 FROM ALL_OBJECTS WHERE s.TABLE_OWNER = ALL_OBJECTS.OWNER AND s.TABLE_NAME = ALL_OBJECTS.OBJECT_NAME AND ALL_OBJECTS.OWNER = 'PLAYGROUND' AND ALL_OBJECTS.OBJECT_TYPE <> 'SYNONYM' ) There it is, our missing record! OWNER SYNONYM_NAME TABLE_OWNER TABLE_NAME --------------------------------------------------- PLAYGROUND MY_TABLE_OLD PLAYGROUND MY_TABLE PLAYGROUND MY_TABLE_BAK PLAYGROUND MY_TABLE PUBLIC MY_TABLE_BAK2 PLAYGROUND MY_TABLE PLAYGROUND BAK_BACKUP_OLD PLAYGROUND MY_TABLE <-- Yep! Displaying the hierarchy There is also a quirky function called SYS_CONNECT_BY_PATH, which can be used to actually display the whole hierarchy in a string form (VARCHAR2, with max 4000 characters!). Here’s how: SELECT-- Magic function SUBSTR( sys_connect_by_path( s.TABLE_OWNER || '.' || s.TABLE_NAME, ' <- ' ) || ' <- ' || s.OWNER || '.' || s.SYNONYM_NAME, 5 ) FROM ( SELECT OWNER, SYNONYM_NAME, TABLE_OWNER, TABLE_NAME FROM ALL_SYNONYMS UNION ALL SELECT OWNER, SYNONYM_NAME, 'PUBLIC', TABLE_NAME FROM ALL_SYNONYMS ) s WHERE s.TABLE_OWNER IN ( 'PLAYGROUND', 'PUBLIC' ) CONNECT BY s.TABLE_OWNER = PRIOR s.OWNER AND s.TABLE_NAME = PRIOR s.SYNONYM_NAME START WITH EXISTS ( SELECT 1 FROM ALL_OBJECTS WHERE s.TABLE_OWNER = ALL_OBJECTS.OWNER AND s.TABLE_NAME = ALL_OBJECTS.OBJECT_NAME AND ALL_OBJECTS.OWNER = 'PLAYGROUND' AND ALL_OBJECTS.OBJECT_TYPE <> 'SYNONYM' ) The above query will now output the following records: PLAYGROUND.MY_TABLE <- PLAYGROUND.MY_TABLE_OLD PLAYGROUND.MY_TABLE <- PLAYGROUND.MY_TABLE_OLD <- PLAYGROUND.MY_TABLE_BAK PLAYGROUND.MY_TABLE <- PLAYGROUND.MY_TABLE_OLD <- PLAYGROUND.MY_TABLE_BAK <- PUBLIC.MY_TABLE_BAK2 PLAYGROUND.MY_TABLE <- PLAYGROUND.MY_TABLE_OLD <- PLAYGROUND.MY_TABLE_BAK <- PUBLIC.MY_TABLE_BAK2 <- PLAYGROUND.BAK_BACKUP_OLD Impressive, eh? Remark: In case you have stale synonyms If you have “stale” synonyms, i.e. synonyms that point to nowhere, Oracle may report them to be pointing to themselves. That’s unfortunate and creates a CYCLE in CONNECT BY. To prevent this from happening, simply add another predicate like so: SELECT SUBSTR( sys_connect_by_path( s.TABLE_OWNER || '.' || s.TABLE_NAME, ' <- ' ) || ' <- ' || s.OWNER || '.' || s.SYNONYM_NAME, 5 ) FROM ( SELECT * FROM ( SELECT OWNER, SYNONYM_NAME, TABLE_OWNER, TABLE_NAME FROM ALL_SYNONYMS UNION ALL SELECT OWNER, SYNONYM_NAME, 'PUBLIC', TABLE_NAME FROM ALL_SYNONYMS ) s-- Add this predicate to prevent cycles WHERE (s.OWNER , s.SYNONYM_NAME) != ((s.TABLE_OWNER , s.TABLE_NAME)) ) s CONNECT BY s.TABLE_OWNER = PRIOR s.OWNER AND s.TABLE_NAME = PRIOR s.SYNONYM_NAME START WITH EXISTS ( SELECT 1 FROM ALL_OBJECTS WHERE s.TABLE_OWNER = ALL_OBJECTS.OWNER AND s.TABLE_NAME = ALL_OBJECTS.OBJECT_NAME AND ALL_OBJECTS.OWNER = 'PLAYGROUND' AND ALL_OBJECTS.OBJECT_TYPE <> 'SYNONYM' ) Can the above query be written in jOOQ? Yes of course. In jOOQ, pretty much everything is possible, if you can write it in SQL. 
Here’s how we use a query similar to the above to resolve Oracle Synonyms in the jOOQ code generator: // Some reusable variables AllObjects o = ALL_OBJECTS; AllSynonyms s1 = ALL_SYNONYMS; AllSynonyms s2 = ALL_SYNONYMS.as("s2"); AllSynonyms s3 = ALL_SYNONYMS.as("s3"); Field<String> dot = inline("."); String arr = " <- "; // The actual query DSL .using(configuration) .select( s3.OWNER, s3.SYNONYM_NAME, connectByRoot(s3.TABLE_OWNER).as("TABLE_OWNER"), connectByRoot(s3.TABLE_NAME).as("TABLE_NAME"), substring( sysConnectByPath( s3.TABLE_OWNER.concat(dot) .concat(s3.TABLE_NAME), arr ) .concat(arr) .concat(s3.OWNER) .concat(dot) .concat(s3.SYNONYM_NAME), 5 )) .from( select() .from( select( s1.OWNER, s1.SYNONYM_NAME, s1.TABLE_OWNER, s1.TABLE_NAME) .from(s1) .union( select( s1.OWNER, s1.SYNONYM_NAME, inline("PUBLIC"), s1.TABLE_NAME) .from(s1)) .asTable("s2")) .where(row(s2.OWNER, s2.SYNONYM_NAME) .ne(s2.TABLE_OWNER, s2.TABLE_NAME)) .asTable("s3")) .connectBy(s3.TABLE_OWNER.eq(prior(s3.OWNER))) .and(s3.TABLE_NAME.eq(prior(s3.SYNONYM_NAME))) .startWith(exists( selectOne() .from(o) .where(s3.TABLE_OWNER.eq(o.OWNER)) .and(s3.TABLE_NAME.eq(o.OBJECT_NAME)) .and(o.OBJECT_TYPE.ne("SYNONYM")) .and(o.OWNER.in(getInputSchemata())) )) .fetch(); Download jOOQ today and try it yourself! Conclusion If you have an intrinsically hierarchical data set, then you will be very unhappy with these simplistic hierarchical SQL features (also with common table expressions). They don’t perform very well, and they’re very hard to express if hierarchies get more complex. So you may as well consider using an actual graph database like Neo4j. But every now and then, a little hierarchy may sneak into your otherwise “standard” relational data model. When it does, be sure to have this useful CONNECT BY clause ready for action. CONNECT BY is supported by (at least): CUBRID, Informix and Oracle. Recursive common table expressions (the SQL standard’s counterpart for CONNECT BY) are supported by (at least): DB2, Firebird, HSQLDB, Oracle, PostgreSQL, SQL Server and Sybase SQL Anywhere – and H2 has some experimental support. In a future post, we’re going to be looking into how to do the same thing with recursive CTEs. Reference: All You Ever Need to Know About Recursive SQL from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Validation groups in Spring MVC

Validation constraints in Bean Validation may be added to one or more groups via the groups attribute. This allows you to restrict the set of constraints applied during validation. It can be handy in cases where some groups should be validated before others, e.g. in wizards. As of Spring MVC 3.1, automatic validation utilizing validation groups is possible with the org.springframework.validation.annotation.Validated annotation. In this article I will use a simple Spring MVC application to demonstrate how easily you can use validation groups to validate Spring’s MVC model attributes. Form Let’s start with the form class that will be validated in steps. Firstly, we define interfaces that represent constraint groups: public class Account implements PasswordAware { interface ValidationStepOne { // validation group marker interface } interface ValidationStepTwo { // validation group marker interface } } Validation constraints Next we assign constraints to groups. Remember, if you don’t provide groups the default one will be used. Please also note @SamePasswords, @StrongPassword – custom constraints that must define the groups attribute: @SamePasswords(groups = {Account.ValidationStepTwo.class}) public class Account implements PasswordAware { @NotBlank(groups = {ValidationStepOne.class}) private String username; @Email(groups = {ValidationStepOne.class}) @NotBlank(groups = {ValidationStepOne.class}) private String email; @NotBlank(groups = {ValidationStepTwo.class}) @StrongPassword(groups = {ValidationStepTwo.class}) private String password; @NotBlank(groups = {ValidationStepTwo.class}) private String confirmedPassword; // getters and setters } Wizard Having the Account, we can create a 3-step wizard @Controller that will let users create an account. In the first step we will let Spring validate constraints in the ValidationStepOne group: @Controller @RequestMapping("validationgroups") @SessionAttributes("account") public class AccountController { @RequestMapping(value = "stepOne") public String stepOne(Model model) { model.addAttribute("account", new Account()); return VIEW_STEP_ONE; } @RequestMapping(value = "stepOne", method = RequestMethod.POST) public String stepOne(@Validated(Account.ValidationStepOne.class) Account account, Errors errors) { if (errors.hasErrors()) { return VIEW_STEP_ONE; } return "redirect:stepTwo"; } } To trigger validation with groups I used the @Validated annotation. This annotation takes a var-arg argument with the groups’ types. The code @Validated(ValidationStepOne.class) triggers validation of constraints in the ValidationStepOne group.
In the next step we will let Spring validate constraints in the ValidationStepTwo group: @Controller @RequestMapping("validationgroups") @SessionAttributes("account") public class AccountController { @RequestMapping(value = "stepTwo") public String stepTwo() { return VIEW_STEP_TWO; } @RequestMapping(value = "stepTwo", method = RequestMethod.POST) public String stepTwo(@Validated(Account.ValidationStepTwo.class) Account account, Errors errors) { if (errors.hasErrors()) { return VIEW_STEP_TWO; } return "redirect:summary"; } } In the summary step we will confirm the entered data and we will let Spring validate constraints of both groups: @Controller @RequestMapping("validationgroups") @SessionAttributes("account") public class AccountController { @RequestMapping(value = "summary") public String summary() { return VIEW_SUMMARY; } @RequestMapping(value = "confirm") public String confirm(@Validated({Account.ValidationStepOne.class, Account.ValidationStepTwo.class}) Account account, Errors errors, SessionStatus status) { status.setComplete(); if (errors.hasErrors()) { // did not pass full validation } return "redirect:start"; } } Prior to Spring 3.1 you could trigger the validation manually. I described this in one of my previous posts: http://blog.codeleak.pl/2011/03/how-to-jsr303-validation-groups-in.html Note: If you want to use validation groups without Spring, you need to pass the groups to javax.validation.Validator#validate(): Validation groups without Spring: Validator validator = Validation .buildDefaultValidatorFactory().getValidator(); Account account = new Account(); // validate with first group Set<ConstraintViolation<Account>> constraintViolations = validator.validate(account, Account.ValidationStepOne.class); assertThat(constraintViolations).hasSize(2); // validate with both groups constraintViolations = validator.validate(account, Account.ValidationStepOne.class, Account.ValidationStepTwo.class); assertThat(constraintViolations).hasSize(4); This is also the easiest way to test validations: public class AccountValidationTest { private Validator validator = Validation.buildDefaultValidatorFactory().getValidator(); @Test public void shouldHaveFourConstraintViolationsWhileValidatingBothGroups() { Account account = new Account(); Set<ConstraintViolation<Account>> constraintViolations = validator.validate( account, Account.ValidationStepOne.class, Account.ValidationStepTwo.class ); assertThat(constraintViolations).hasSize(4); } @Test public void shouldHaveTwoConstraintViolationsWhileStepOne() { Account account = new Account(); Set<ConstraintViolation<Account>> constraintViolations = validator.validate( account, Account.ValidationStepOne.class ); assertThat(constraintViolations).hasSize(2); } } Testing validation with Spring Test Testing validation with Spring Test offers a more sophisticated way of testing whether validation/binding failed. For the examples, have a look at my other blog post: Spring MVC Integration Testing: Assert the given model attribute(s) have global errors. The source code for this article can be found here: https://github.com/kolorobot/spring-mvc-beanvalidation11-demo Reference: Validation groups in Spring MVC from our JCG partner Rafal Borowiec at the Codeleak.pl blog....

Upgrading Spring 3.x and Hibernate 3.x to Spring Platform 1.0.1 (Spring + hibernate 4.x)

I recently volunteered to upgrade our newest project to the latest version of Spring Platform. What Spring Platform gives you is dependency & plugin management across the whole Spring framework’s set of libraries. Since we had fallen behind a little, the upgrade did raise some funnies. Here are the things I ran into: Maven: Our pom files were still referencing hibernate.jar and ehcache.jar. These artefacts don’t exist in the latest version, so I replaced those with hibernate-core.jar and ehcache-core.jar. We also still use the hibernate tools + maven run plugin to reverse engineer our db objects. This I needed to update to a release candidate: <hibernate-tools.version>4.3.1.CR1</hibernate-tools.version> Hibernate: The code “Hibernate.createBlob”… no longer exists; replaced with: private Blob createBlob(final byte[] bytes) { return NonContextualLobCreator.INSTANCE.wrap(NonContextualLobCreator.INSTANCE.createBlob(bytes)); } On the HibernateTemplate, return types are now List, not the element type… so I needed to add casts for the lists being returned. import org.hibernate.classic.Session; replaced with: import org.hibernate.Session; Reverse engineering works a little differently… it assigns Long to numeric… Added: <type-mapping> <sql-type jdbc-type="NUMERIC" precision="4" hibernate-type="java.lang.Integer" /> <sql-type jdbc-type="NUMERIC" precision="6" hibernate-type="java.lang.Integer" /> <sql-type jdbc-type="NUMERIC" precision="8" hibernate-type="java.lang.Integer" /> <sql-type jdbc-type="NUMERIC" precision="10" hibernate-type="java.lang.Long" /> <sql-type jdbc-type="DECIMAL" precision='4' scale='0' hibernate-type="java.lang.Integer" not-null="true"/> <sql-type jdbc-type="DECIMAL" precision='6' scale='0' hibernate-type="java.lang.Integer" not-null="true"/> <sql-type jdbc-type="DATE" hibernate-type="java.util.Date"/> </type-mapping> Possible Errors: Caused by: org.hibernate.service.UnknownUnwrapTypeException: Cannot unwrap to requested type [javax.sql.DataSource] Add a dependency for c3p0: <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-c3p0</artifactId> <version>${hibernate.version}</version> </dependency> And configure the settings in the cfg.xml for it: <property name="hibernate.c3p0.min_size">5</property> <property name="hibernate.c3p0.max_size">20</property> <property name="hibernate.c3p0.timeout">300</property> <property name="hibernate.c3p0.max_statements">50</property> <property name="hibernate.c3p0.idle_test_period">3000</property> Caused by: java.lang.ClassNotFoundException: org.hibernate.engine.FilterDefinition Probably still using a reference to a hibernate3 factory / bean somewhere; change to hibernate4: org.springframework.orm.hibernate3.LocalSessionFactoryBean org.springframework.orm.hibernate3.HibernateTransactionManager Caused by: java.lang.ClassNotFoundException: Could not load requested class : org.hibernate.hql.classic.ClassicQueryTranslatorFactory There is a minor change in the new APIs, so this can be resolved by replacing the property value with: org.hibernate.hql.internal.classic.ClassicQueryTranslatorFactory.
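To illustrate the HibernateTemplate change mentioned above, here is a hedged sketch of the cast that is now needed in a DAO based on HibernateDaoSupport (Customer is a hypothetical entity used only for this example):

import java.util.List;
import org.springframework.orm.hibernate4.support.HibernateDaoSupport;

public class CustomerDao extends HibernateDaoSupport {
    @SuppressWarnings("unchecked")
    public List<Customer> findAllCustomers() {
        // find() now returns List<?>, hence the explicit (unchecked) cast to the entity type
        return (List<Customer>) getHibernateTemplate().find("from Customer");
    }
}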
Spring: Amazingly some of our application context files still referenced the Spring DTD … replaced with XSD: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.springframework.org/schema/beans" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd"> In the Spring configs I added for c3p0: <prop key="hibernate.c3p0.min_size">5</prop> <prop key="hibernate.c3p0.max_size">20</prop> <prop key="hibernate.c3p0.timeout">300</prop> <prop key="hibernate.c3p0.max_statements">50</prop> <prop key="hibernate.c3p0.idle_test_period">3000</prop> Spring removed the “local”= attribute, so I just needed to change that to “ref”=. Spring HibernateDaoSupport no longer has “releaseSession(session);”, which is a good thing, so I was forced to update the code to work within a transaction. Possible Errors: getFlushMode is not valid without active transaction; nested exception is org.hibernate.HibernateException: getFlushMode is not valid without active transaction Removed from the hibernate properties: <prop key="hibernate.current_session_context_class">thread</prop> (Supply a custom strategy for the scoping of the “current” Session. See Section 2.5, “Contextual sessions” for more information about the built-in strategies.) org.springframework.dao.InvalidDataAccessApiUsageException: Write operations are not allowed in read-only mode (FlushMode.MANUAL): Turn your Session into FlushMode.COMMIT/AUTO or remove ‘readOnly’ marker from transaction definition. Another option is: <bean id="productHibernateTemplate" class="org.springframework.orm.hibernate4.HibernateTemplate"> <property name="sessionFactory" ref="productSessionFactory"/> <property name="checkWriteOperations" value="false"/> </bean> java.lang.NoClassDefFoundError: javax/servlet/SessionCookieConfig Servlet version update: <dependency> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> <version>3.0.1</version> </dependency> Then, deploying on weblogic: javassist: $$_javassist_ cannot be cast to javassist.util.proxy.Proxy The issue here was that there were different versions of javassist being brought into the ear. I removed all references from all our poms, so that the correct version gets pulled in from Spring/Hibernate… and then configured weblogic to prefer our version: <?xml version="1.0" encoding="UTF-8"?> <weblogic-application> <application-param> <param-name>webapp.encoding.default</param-name> <param-value>UTF-8</param-value> </application-param> <prefer-application-packages> <package-name>javax.jws.*</package-name> <package-name>org.apache.xerces.*</package-name> <package-name>org.apache.xalan.*</package-name> <package-name>org.apache.commons.net.*</package-name> <package-name>org.joda.*</package-name> <package-name>javassist.*</package-name> </prefer-application-packages> </weblogic-application> Reference: Upgrading Spring 3.x and Hibernate 3.x to Spring Platform 1.0.1 (Spring + hibernate 4.x) from our JCG partner Brian Du Preez at the Zen in the art of IT blog....

The “Free”, “Standard”, “Open” Software Heresy

There are those people that have a strong, dogmatic belief in what they call “Free” or “Standard” or “Open” software. One of those individuals is Jimmie (let’s call him Jimmie in this article) who has responded to an article about Java persistence by Marco Behler on TheServerSide. Let me cite Jimmie’s response here:
JPA is difficult but complete. It has a learning curve, and you’ll have surprises if you try to shortcut its complexities. But they mostly are there for a reason. Difficult stuff is difficult using JPA, that’s true. JOOQ is quick to learn. And is proprietary stuff. Not free. Only one implementation. No public review, only one body involved in its evolution. SQL-oriented, not OO (ok, they say it’s a feature). As a serious professional, learn JPA. Fully. There is no excuse for not knowing which sql queries are generated in your production app. Replacing it with a more basic framework is no solution.
Let’s not go deeply into the concrete difference between JPA and jOOQ / SQL. That topic has already been discussed at length on Reddit. Let’s consider the essence of the comparison as perceived by Jimmie. Because Jimmie would probably say exactly the same thing when comparing:
- JSF with Ext.JS or ZK
- PostgreSQL with Oracle
- MS Office or Google Docs (probably OK cause “gratis”) with LibreOffice
- Linux with Windows or MacOSX (although he might perform some doublethink as a Mac user)
Software not being free
Jimmie, is YOUR software free and “not proprietary”? If so, how do you finance it? How do you earn a living? And why are you doing it? What really motivates you? What really motivates your customers and why?
Only one implementation
How many people actually do use alternatives to Hibernate and why? Are they using EclipseLink mainly because they used to use TopLink for the last 20 years and the learning curve (or benefit) to switch to Hibernate is too high? How often do you actually switch implementations? What keeps you from implementing the jOOQ API, and open-sourcing its implementation? And most importantly: Do you always adhere to the JPA API, even if Hibernate has lots of awesome, proprietary extensions that just happen to work so much better / easier?
No public review
Who exactly is “public”, and what are their main interests? Did you know that one of the major driving forces for the JDK is Credit Suisse, being a large customer for Oracle in the Java environment, for instance? What is your stake and relation with Credit Suisse as your “public” representative?
Only one body involved in its evolution
Do you say that to YOUR customers also, about your own software as well?
SQL-oriented vs “a serious professional”
What’s not serious about SQL? In fact, SQL is reviewed by more entities than the JLS, let alone the JPA specs. Have you ever thought about that?
More basic
Fair enough. But don’t forget: You probably replaced your sophisticated EJB 2.0 framework (still a standard!) from the early 2000’s with a more basic one, which was (at the time) proprietary, had only one implementation, had no public review, nor multiple bodies involved in its evolution. It was, at the time, called Hibernate. And let me take the opportunity to cite Gavin King (creator of Hibernate) about when to use Hibernate: standard OFFSET pagination, contextually typed value specifications, quantified comparison predicates… and of course all the details of interoperation between SQL and XQuery, one of the most popular aspects of the SQL:2011 standard!
And please, learn this FULLY, regardless of whether these things are part of your specific implementation. Because as a serious professional, you shall fully learn SQL. And while you’re at that, learn also everything about execution plans, and join, fetch, buffer caching, cursor caching and all other sorts of algorithms. Because there is no excuse for not knowing which SQL transformations are generated by your database’s CBO. I know you like standards, Jimmie. But beware of the fact that there are some people out there who cannot wait for a standard to evolve to solve their problems. They may have more immediate problems. More specific problems. Simpler problems. Problems that might be solved only by proprietary software, so far. Or problems that are solved by proprietary software, that can be put into production with much less effort than your standards, Jimmie. Lower time-to-market is what your customer might consider “professional”. Not whether this or that tech is used. Someone always invents something proprietary at some time. It might just evolve into a standard. It might have been a bad idea and not evolve into anything. Or it might evolve into a standard and then be the worst standard ever. See again: EJB 2.0. I think we all agree on that, today. No, Jimmie, the world isn’t black and white. It isn’t just about standards vs. proprietary. About free (libre) vs. commercial. About free (gratis) vs. “closed”. It’s about creating value for your customer. Oh, and Jimmie. I sincerely hope you’re neither a Windows, nor a Mac user, because that wouldn’t be free, and there is only one implementation of each OS, and no public review, and only one body involved in their evolutions. And yet, the whole world runs on one of them. Thanks for your attention, Jimmie.Reference: The “Free”, “Standard”, “Open” Software Heresy from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

From Personas to User Stories

Summary User stories are a powerful technique to capture the product functionality from the perspective of a user or customer. But how do we discover the right stories? When should they be written and how detailed should they be? Read this post to find out my answers to these questions. 1. Start with Personas The first step towards writing the right user stories is to understand your target users and customers. After all, user stories want to tell a story about the users using the product. If you don’t know who the users are and what problem you want to solve, then it’s impossible to write the right stories and you end up with a long wish list rather than a description of the relevant product functionality. Personas offer a great way to capture the users and the customers with their needs. They are fictional characters that have a name and picture; relevant characteristics such as a role, activities, behaviours, and attitudes; and a goal, which is the problem that has to be addressed or the benefit that should be provided. Let’s look at an example. Say we want to create a game for children, which is fun to play and which educates the kids about music and dancing. We would then create at least two personas, one to represent the children, and one for the parents, as the following picture illustrates. The two sample personas above use my simple yet effective persona template. It encourages you to keep your personas concise, to focus on what really matters and to leave out the rest. You can download the template from romanpichler.com/tools/persona-template where more information on writing personas and using the template is available. Once you have created a cast of characters, select a primary persona, the persona you are mainly designing and building the product for. This helps you make the right product decisions and get the user experience (UX) right. In the example above, I have chosen Yasmin as the primary persona. 2. Derive Epics from the Persona Goals Once you have created your personas, use their goals to identify the product functionality. Ask yourself what the product should do to address the personas’ problems or to create the desired benefits for them, as the following picture shows. Start with your primary persona and capture the functionality as epics, as coarse-grained, high-level stories. Write all the epics necessary to meet the persona goals but keep them rough and sketchy at this stage. For the dance game, we could write the epics below assuming that the game will be initially launched as an iPad app: As the epics above show, the game should allow the players to select different characters, to make them dance, to choose different dance floors and music tracks, to play the game with their friends, and to post a snapshot of their game on Facebook. While epics are great to sketch the product’s functionality, there is more to your product than epics and stories: You should also capture the user interaction and the sequences in which the epics are used, the visual design of your product, and the important nonfunctional qualities such as interoperability and performance. Use, for instance, workflow diagrams, story maps, storyboards, sketches, mock-ups, and constraint cards to describe them. You can find out more about describing the different product aspects in my post “User Stories are Not Enough to Create a Great User Experience”.
3. Progressively Decompose the Epics into User Stories With a holistic but coarse-grained description of your product in place, start progressively decomposing your epics. Rather than detailing all epics and writing all user stories in one go, you derive your stories step by step, as the following picture shows. As long as there are some significant risks present and you are figuring out what the product should look like and do, it’s best to derive just enough user stories just in time for the next sprint. Use your sprint goal or hypothesis to determine which epics to decompose and which stories to write, as the following diagram illustrates. The approach depicted above minimises the number of detailed items in your product backlog. This makes it easier to integrate new insights derived from exposing product increments or minimum viable products (MVPs) to users and customers. Say that we want to address the risk of creating the wrong game characters by developing an executable prototype that allows us to run a usability test with selected children. We could then write the following user stories: The stories above are derived from the epics “Choose character” and “Play with character”. The resulting prototype only partially implements the two epics – just to the extent of being able to test if the characters resonate with the users. Once you understand better how to meet the customer and user needs, you can start pre-writing user stories and have a larger inventory of detailed items on your product backlog, as you are unlikely to experience bigger changes to your epics and your overall backlog. 4. Get the Stories Ready Before the development team starts working on the stories, check that each user story is ready: clear, feasible, and testable. A story is clear if there is a shared understanding between the product owner and the team about its meaning. It is feasible if it can be delivered in the next sprint according to the Definition of Done. This implies that the story is small enough to fit into the sprint but also that the necessary user interface design, test, and documentation work can be carried out. In the case of the sample stories above, we would have to add acceptance criteria, ensure that the stories are small enough to fit into the next sprint, and consider creating some very rough design sketches to indicate what the characters look like. For instance, to get the story “Yas chooses the little girl” ready, we could create the following rough sketch: The sketch above complements the user story and allows the team to implement the entire story including the visual design in the next sprint. With ready user stories in place the development team is in a good position to progress your product in an effective manner. For more details on getting user stories ready please take a look at my post “The Definition of Ready in Scrum”.

People Are Not Resources

My manager reviewed the org chart along with the budget. “I need to cut the budget. Which resources can we cut?” “Well, I don’t think we can cut software licenses,” I was reviewing my copy of the budget. “I don’t understand this overhead item here,” I pointed to a particular line item. “No,” he said. “I’m talking about people. Which people can we lay off? We need to cut expenses.” “People aren’t resources! People finish work. If you don’t want us to finish projects, let’s decide which projects not to do. Then we can re-allocate people, if we want. But we don’t start with people. That’s crazy.” I was vehement. My manager looked at me as if I’d grown three heads. “I’ll start wherever I want,” he said. He looked unhappy. “What is the target you need to accomplish? Maybe we can ship something earlier, and bring in revenue, instead of laying people off? You know, bring up the top line, not decrease the bottom line?” Now he looked at me as if I had four heads. “Just tell me who to cut. We have too many resources.” When managers think of people as resources, they stop thinking. I’m convinced of this. My manager was under pressure from his management to reduce his budget. In the same way that technical people under pressure to meet a date stop thinking, managers under pressure stop thinking. Anyone under pressure stops thinking. We react. We can’t consider options. That’s because we are so very human. People are resourceful. But we, the people, are not resources. We are not the same as desks, licenses, infrastructure, and other goods that people need to finish their work. We need to change the language in our organizations. We need to talk about people as people, not resources. And, that is the topic of this month’s management myth: Management Myth 32: I Can Treat People as Interchangeable Resources. Let’s change the language in our organizations. Let’s stop talking about people as “resources” and start talking about people as people. We might still need layoffs. But, maybe we can handle them with humanity. Maybe we can think of the work strategically. And, maybe, just maybe, we can think of the real resources in the organization. You know, the ones we buy with the capital equipment budget or expense budget, not operating budget. The desks, the cables, the computers. Those resources. The ones we have to depreciate. Those are resources. Not people. People become more valuable over time. Show me a desk that does that. Ha! Go read Management Myth 32: I Can Treat People as Interchangeable Resources.Reference: People Are Not Resources from our JCG partner Johanna Rothman at the Managing Product Development blog....

Java yield-like using Stream API

Several programming languages, such as Ruby or Python to name a few, provide the yield command. Yield provides an effective way, in terms of memory consumption, to create a series of values, by generating such values on demand. More information on Python Yield. Let’s consider a class or method requiring a huge number of secure random integers. The classical approach would be to create an array or collection of such integers. Yield provides two major advantages over such an approach: yield does not require knowing the length of the series in advance, and yield does not require storing all values in memory. Fortunately, yield-like features can be used in Java 8 thanks to the Stream API: import java.security.NoSuchAlgorithmException; import java.security.SecureRandom; import java.util.Date; import java.util.function.Supplier; import java.util.stream.Stream; public class Yield { private static final Integer RANDOM_INTS = 10; public static void main(String[] args) { try (Stream<Integer> randomInt = generateRandomIntStream()){ Object[] randomInts = randomInt.limit(RANDOM_INTS) .sorted().toArray(); for (int i = 0; i < randomInts.length; i++) System.out.println(randomInts[i]); } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } } private static Stream<Integer> generateRandomIntStream() throws NoSuchAlgorithmException{ return Stream.generate(new Supplier<Integer>() { final SecureRandom random = SecureRandom .getInstance("SHA1PRNG"); boolean init = false; int numGenerated = 0; @Override public Integer get() { if (!init){ random.setSeed(new Date().getTime()); init = true; System.out.println("Seeding"); } final int nextInt = random.nextInt(); System.out.println("Generated random " + numGenerated++ + ": " + nextInt); return nextInt; } }); } } Following is the output after the provided code snippet is executed: Seeding Generated random 0: -896358073 Generated random 1: -1268521873 Generated random 2: 9627917 Generated random 3: -2106415441 Generated random 4: 935583477 Generated random 5: -1132421439 Generated random 6: -1324474601 Generated random 7: -1768257192 Generated random 8: -566921081 Generated random 9: 425501046 -2106415441 -1768257192 -1324474601 -1268521873 -1132421439 -896358073 -566921081 9627917 425501046 935583477 It is easy to see that the Supplier is only instantiated once. Of course, we can take advantage of all Stream API features such as limit() and sorted(). The line randomInt.limit(RANDOM_INTS).sorted().toArray() triggers the generation of RANDOM_INTS values which are then sorted and stored as an array. Reference: Java yield-like using Stream API from our JCG partner Sergio Molina at the TODOdev blog....
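For comparison – just an illustration, not part of the original post – the same kind of lazy series can be written more concisely with a method reference instead of an anonymous Supplier:

import java.security.SecureRandom;
import java.util.stream.Stream;

public class YieldLambda {
    public static void main(String[] args) {
        SecureRandom random = new SecureRandom();
        // Stream.generate() is lazy: values are produced only as limit()/sorted() request them
        Stream.generate(random::nextInt)
              .limit(10)
              .sorted()
              .forEach(System.out::println);
    }
}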

eclipse-pmd – New PMD plugin for Eclipse

I am an Eclipse user. So when I wanted to analyze my code with PMD, I needed to use the “PMD for Eclipse” plugin. This plugin used to be very buggy, which improved in later versions (currently 4.0.3). But the performance is really bad sometimes, especially when you are dealing with a relatively big codebase and have the option “Check code after Saving” on. eclipse-pmd plugin So when I realized that there is a new alternative PMD plugin called eclipse-pmd out there, I evaluated it immediately with great happiness. Installation uses the modern Eclipse Marketplace method. You just need to go to “Help” -> “Eclipse Marketplace…” and search for “eclipse-pmd”. Then hit “Install” and follow the instructions. After installation I was a little bit confused because I didn’t find any configuration options in the general settings (“Window” -> “Preferences”). I discovered that you need to turn on PMD for each project separately, which makes sense, because you can have a different ruleset per project. So to turn it on, right click on the project -> “Preferences” -> “PMD” (there would be two PMD sections if you didn’t uninstall the old PMD plugin) -> “Enable PMD for this project” -> “Add…”. Now you should pick the location of a PMD ruleset file. Unlike the old PMD plugin, eclipse-pmd doesn’t import the ruleset; it uses the ruleset file directly. This is very handy, because typically you want to have it in source control. When you pull changes to the ruleset file from the source control system, they are applied without re-import (re-import was needed for the old PMD plugin). A problem can arise when you (or your team) don’t have an existing ruleset. I would suggest starting with the full ruleset and excluding rules you don’t want to use. Your ruleset would evolve anyway, so starting with the most restrictive (default) deck makes perfect sense to me. Unfortunately the eclipse-pmd plugin doesn’t provide an option to generate a ruleset file. So I created a full ruleset for PMD 5.1.1 (5.1.1 is the PMD version, not the plugin version). I have to admit that it was created with the help of the old PMD plugin. You can see that I literally included all the rule categories. I would suggest specifying your set this way and excluding/configuring rules explicitly as needed. Here is a link to the PMD site that explains how to customize your ruleset. This approach can be handy when the PMD version is updated. New rules can appear in a category and they will be automatically included in your ruleset when you list categories, not individual rules. But you have to keep an eye on new rules/categories when updating the PMD version anyway, because categories often change with new PMD versions. So now we should have the ruleset configured and working. Here are some screenshots of rules in action: when you hover over a left side panel warning, when you hover over a problematic snippet, and when you do a quick fix on a problematic snippet. Generating a suppress warning annotation for PMD rules is a very nice feature. It also provides quick fixes for some rules. Take a look at its change log site for the full list. These PMD warnings sometimes clash with Eclipse native warnings, so there is a possibility to make them more visible. Go to “Window” -> “Preferences” -> “General” -> “Editors” -> “Text Editors” -> “Annotations” and find “PMD Violations”. Here you can configure your own style of highlighting PMD issues. This is mine: To explore the full feature list of this plugin take a look at its change log site. There are some features of the old plugin I miss though. For example I would appreciate some quick link or the full description of the rule.
The short description provided is sometimes not enough. I encourage you to take a look at the full PMD rule description if you are not sure what the source of the problem is. You will learn a lot about the Java language itself or about the libraries you are using. Quick links would help a lot in such cases. Another behavior I miss is a full list of PMD violations for the current file. Also, some rules don’t use code highlighting (only side panel markers). It is sometimes hard to distinguish between compiler and PMD issues. This is a problem for me because our team doesn’t use JavaDoc warnings but I do. So I get a lot of JavaDoc warnings from code written by teammates. And sometimes I can miss a PMD issue because it is lost in the JavaDoc warnings. (Fortunately the SVN commit is rejected if I forget to fix some rule.) Conclusion This plugin enhanced my Eclipse workflow. No more disruptions because of endless “Checking code…” processing by the old plugin.