
SQL Tip of the Day: Be Wary of SELECT COUNT(*)

Recently, I’ve encountered this sort of query all over the place at a customer site:

DECLARE
  v_var NUMBER(10);
BEGIN
  SELECT COUNT(*)
  INTO v_var
  FROM table1
  JOIN table2 ON table1.t1_id = table2.t1_id
  JOIN table3 ON table2.t2_id = table3.t2_id
  ...
  WHERE some_predicate;

  IF (v_var = 1) THEN
    do_something
  ELSE
    do_something_else
  END IF;
END;

Unfortunately, COUNT(*) is often the first solution that comes to mind when we want to check our relations for some predicate. But COUNT() is expensive, especially if all we’re doing is checking our relations for existence. Does the word ring a bell? Yes, we should use the EXISTS predicate, because if we don’t care about the exact number of records that return true for a given predicate, we shouldn’t go through the complete data set to actually count the exact number. The above PL/SQL block can be rewritten trivially to this one:

DECLARE
  v_var NUMBER(10);
BEGIN
  SELECT CASE WHEN EXISTS (
    SELECT 1
    FROM table1
    JOIN table2 ON table1.t1_id = table2.t1_id
    JOIN table3 ON table2.t2_id = table3.t2_id
    ...
    WHERE some_predicate
  ) THEN 1 ELSE 0 END
  INTO v_var
  FROM dual;

  IF (v_var = 1) THEN
    do_something
  ELSE
    do_something_else
  END IF;
END;

Let’s measure! Query 1 yields this execution plan:

-----------------------------------------------
| Id | Operation          | E-Rows | A-Rows |
-----------------------------------------------
|  0 | SELECT STATEMENT   |        |      1 |
|  1 | SORT AGGREGATE     |      1 |      1 |
|* 2 | HASH JOIN          |      4 |      4 |
|* 3 | TABLE ACCESS FULL  |      2 |      2 |
|* 4 | TABLE ACCESS FULL  |      6 |      6 |
-----------------------------------------------

Query 2 yields this execution plan:

----------------------------------------------
| Id | Operation          | E-Rows | A-Rows |
----------------------------------------------
|  0 | SELECT STATEMENT   |        |      1 |
|  1 | NESTED LOOPS       |      4 |      1 |
|* 2 | TABLE ACCESS FULL  |      2 |      1 |
|* 3 | TABLE ACCESS FULL  |      2 |      1 |
|  4 | FAST DUAL          |      1 |      1 |
----------------------------------------------

You can ignore the TABLE ACCESS FULL operations; the actual query was executed on a trivial database with no indexes. What’s essential, however, are the much improved E-Rows values (E = Estimated) and, even more importantly, the optimal A-Rows values (A = Actual). As you can see, the EXISTS predicate could be aborted early, as soon as the first record matching the predicate was encountered – in this case immediately. See this post for more details on how to collect Oracle execution plans.

Conclusion

Whenever you encounter a COUNT(*) operation, you should ask yourself if it is really needed. Do you really need to know the exact number of records that match a predicate? Or are you already happy knowing that any record matches the predicate? Answer: it’s probably the latter.

Reference: SQL Tip of the Day: Be Wary of SELECT COUNT(*) from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....
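As an aside to the tip above, the same "check for existence, don’t count" idea applies when the check is made from application code rather than PL/SQL. Here is a minimal, hedged JDBC sketch (not from the original article): table and column names reuse the example above, the some_column filter is hypothetical, and the FETCH FIRST clause assumes Oracle 12c or another database that supports it.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ExistsCheck {

    // Returns true as soon as at least one matching row exists, without counting them all.
    static boolean matchExists(Connection connection, int someValue) throws SQLException {
        String sql =
            "SELECT 1 FROM table1 " +
            "JOIN table2 ON table1.t1_id = table2.t1_id " +
            "WHERE table2.some_column = ? " +
            "FETCH FIRST 1 ROW ONLY";   // or a ROWNUM = 1 predicate on older Oracle versions
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setInt(1, someValue);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();       // any row at all means "exists"
            }
        }
    }
}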

Java’s Volatile Modifier

A while ago I wrote a Java servlet Filter that loads configuration in its init function (based on a parameter from web.xml). The filter’s configuration is cached in a private field. I set the volatile modifier on the field. When I later checked the company Sonar to see if it found any warnings or issues in the code I was a bit surprised to learn that there was a violation on the use of volatile. The explanation read:

Use of the keyword ‘volatile’ is generally used to fine tune a Java application, and therefore, requires a good expertise of the Java Memory Model. Moreover, its range of action is somewhat misknown. Therefore, the volatile keyword should not be used for maintenance purpose and portability.

I would agree that volatile is misknown by many Java programmers. For some it is even unknown. Not only because it’s never used much in the first place, but also because its definition changed since Java 1.5. Let me get back to this Sonar violation in a bit and first explain what volatile means in Java 1.5 and up (until Java 1.8 at the time of writing).

What is Volatile?

While the volatile modifier itself comes from C, it has a completely different meaning in Java. This may not help in growing an understanding of it: googling for volatile can lead to results about either language. Let’s take a quick side step and see what volatile means in C first. In the C language the compiler ordinarily assumes that variables cannot change value by themselves. While this makes sense as default behavior, sometimes a variable may represent a location that can be changed (like a hardware register). Using a volatile variable instructs the compiler not to apply optimizations based on that assumption.

Back to Java. The meaning of volatile in C would be useless in Java. The JVM uses native libraries to interact with the OS and hardware. Furthermore, it is simply impossible to point Java variables to specific addresses, so variables actually won’t change value by themselves. However, the value of variables on the JVM can be changed by different threads. By default the compiler assumes that variables won’t change in other threads. Hence it can apply optimizations such as reordering memory operations and caching the variable in a CPU register. Using a volatile variable instructs the compiler not to apply these optimizations. This guarantees that a reading thread always reads the variable from memory (or from a shared cache), never from a local cache.

Atomicity

Furthermore, on a 32-bit JVM volatile makes writes to a 64-bit variable (like long or double) atomic. To write a variable the JVM instructs the CPU to write an operand to a position in memory. When using the 32-bit instruction set, what if the size of a variable is 64 bits? Obviously the variable must be written with two instructions, 32 bits at a time. In multi-threaded scenarios another thread may read the variable halfway through the write. At that point only the first half of the variable has been written. This race condition is prevented by volatile, effectively making writes to 64-bit variables atomic on 32-bit architectures.

Note that above I talked about writes, not updates. Using volatile won’t make updates atomic. E.g. ++i when i is volatile would read the value of i from the heap or L3 cache into a local register, increment that register, and write the register back into the shared location of i. In between reading and writing, i might be changed by another thread. Placing a lock around the read and write instructions makes the update atomic.
Or better, use non-blocking instructions from the atomic variable classes in the java.util.concurrent.atomic package.

Side Effect

A volatile variable also has a side effect on memory visibility. Not only are changes to the volatile variable itself visible to other threads, but so are any side effects of the code that led up to the change, once a thread reads the volatile variable. Or more formally, a volatile variable establishes a happens-before relationship with subsequent reads of that variable. I.e. from the perspective of memory visibility, writing a volatile variable is effectively like exiting a synchronized block, and reading a volatile variable like entering one.

Choosing Volatile

Back to my use of volatile to initialize a configuration once and cache it in a private field. Up to now I believe the best way to ensure visibility of this field to all threads is to use volatile. I could have used AtomicReference instead. Since the field is only written once (after construction, hence it cannot be final), atomic variables communicate the wrong intent. I don’t want to make updates atomic, I want to make the cache visible to all threads. And for what it’s worth, the atomic classes use volatile too.

Thoughts on this Sonar Rule

Now that we’ve seen what volatile means in Java, let’s talk a bit more about this Sonar rule. In my opinion this rule is one of the flaws in configurations of tools like Sonar. Using volatile can be a really good thing to do if you need shared (mutable) state across threads. Sure, you must keep this to a minimum. But the consequence of this rule is that people who don’t understand what volatile is follow the recommendation not to use volatile. If they remove the modifier, they effectively introduce a race condition.

I do think it’s a good idea to automatically raise red flags when misknown or dangerous language features are used. But maybe this is only a good idea when there are better alternatives to solve the same line of problems. In this case, volatile has no such alternative. Note that this is in no way intended as a rant against Sonar. However, I do think that people should select a set of rules that they find important to apply, rather than embracing default configurations. I find the idea of using rules simply because they are enabled by default a bit naive. There’s an extremely high probability that your project is not the one that tool maintainers had in mind when picking their standard configuration. Furthermore, I believe that when you encounter a language feature that you don’t know, you should learn about it. As you learn about it you can decide if there are better alternatives.

Java Concurrency in Practice

The de facto standard book about concurrency in the JVM is Java Concurrency in Practice by Brian Goetz. It explains the various aspects of concurrency in several levels of detail. If you use any form of concurrency in Java (or impure Scala), make sure you at least read the first three chapters of this brilliant book to get a decent high-level understanding of the matter.

Reference: Java’s Volatile Modifier from our JCG partner Bart Bakker at the Software Craft blog....
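To make the use case described above concrete, here is a minimal sketch (not from the original post) of a servlet filter that loads its configuration once in init() and publishes it to all request threads through a volatile field. The class name and the init-param name are made up for illustration.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class ConfigurableFilter implements Filter {

    // volatile guarantees that the value written in init() is visible
    // to every thread that later reads it in doFilter().
    private volatile String configuredValue;

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        // written once, after construction – which is why the field cannot be final
        configuredValue = filterConfig.getInitParameter("configuredValue");
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // a plain read; the volatile read guarantees we see the value written by init()
        request.setAttribute("config", configuredValue);
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }
}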

Are You Using SQL PIVOT Yet? You Should!

Every once in a while, we run into these rare SQL issues where we’d like to do something that seems out of the ordinary. One of these things is pivoting rows to columns. A recent question on Stack Overflow by Valiante asked for precisely this. Going from this table:             +------+------------+----------------+-------------------+ | dnId | propNameId | propertyName | propertyValue | +------+------------+----------------+-------------------+ | 1 | 10 | objectsid | S-1-5-32-548 | | 1 | 19 | _objectclass | group | | 1 | 80 | cn | Account Operators | | 1 | 82 | samaccountname | Account Operators | | 1 | 85 | name | Account Operators | | 2 | 10 | objectsid | S-1-5-32-544 | | 2 | 19 | _objectclass | group | | 2 | 80 | cn | Administrators | | 2 | 82 | samaccountname | Administrators | | 2 | 85 | name | Administrators | | 3 | 10 | objectsid | S-1-5-32-551 | | 3 | 19 | _objectclass | group | | 3 | 80 | cn | Backup Operators | | 3 | 82 | samaccountname | Backup Operators | | 3 | 85 | name | Backup Operators | +------+------------+----------------+-------------------+ … we’d like to transform rows into colums as such: +------+--------------+--------------+-------------------+-------------------+-------------------+ | dnId | objectsid | _objectclass | cn | samaccountname | name | +------+--------------+--------------+-------------------+-------------------+-------------------+ | 1 | S-1-5-32-548 | group | Account Operators | Account Operators | Account Operators | | 2 | S-1-5-32-544 | group | Administrators | Administrators | Administrators | | 3 | S-1-5-32-551 | group | Backup Operators | Backup Operators | Backup Operators | +------+--------------+--------------+-------------------+-------------------+-------------------+ The idea is that we only want one row per distinct dnId, and then we’d like to transform the property-name-value pairs into columns, one column per property name. Using Oracle or SQL Server PIVOT The above transformation is actually quite easy with Oracle and SQL Server, which both support the PIVOT keyword on table expressions. Here is how the desired result can be produced with SQL Server: SELECT p.* FROM ( SELECT dnId, propertyName, propertyValue FROM myTable ) AS t PIVOT( MAX(propertyValue) FOR propertyName IN ( objectsid, _objectclass, cn, samaccountname, name ) ) AS p; (SQLFiddle here) And the same query with a slightly different syntax in Oracle: SELECT p.* FROM ( SELECT dnId, propertyName, propertyValue FROM myTable ) t PIVOT( MAX(propertyValue) FOR propertyName IN ( 'objectsid' as "objectsid", '_objectclass' as "_objectclass", 'cn' as "cn", 'samaccountname' as "samaccountname", 'name' as "name" ) ) p; (SQLFiddle here) How does it work? It is important to understand that PIVOT (much like JOIN) is a keyword that is applied to a table reference in order to transform it. In the above example, we’re essentially transforming the derived table t to form the pivot table p. We could take this further and join p to another derived table as so: SELECT * FROM ( SELECT dnId, propertyName, propertyValue FROM myTable ) t PIVOT( MAX(propertyValue) FOR propertyName IN ( 'objectsid' as "objectsid", '_objectclass' as "_objectclass", 'cn' as "cn", 'samaccountname' as "samaccountname", 'name' as "name" ) ) p JOIN ( SELECT dnId, COUNT(*) availableAttributes FROM myTable GROUP BY dnId ) q USING (dnId); The above query will now allow for finding those rows for which there isn’t a name / value pair in every column. 
If we remove one of the entries from the original table, the above query might now return:

| DNID | OBJECTSID    | _OBJECTCLASS | CN                | SAMACCOUNTNAME    | NAME              | AVAILABLEATTRIBUTES |
|------|--------------|--------------|-------------------|-------------------|-------------------|---------------------|
|    1 | S-1-5-32-548 | group        | Account Operators | Account Operators | Account Operators |                   5 |
|    2 | S-1-5-32-544 | group        | Administrators    | (null)            | Administrators    |                   4 |
|    3 | S-1-5-32-551 | group        | Backup Operators  | Backup Operators  | Backup Operators  |                   5 |

jOOQ also supports the SQL PIVOT clause through its API.

What if I don’t have PIVOT?

In simple PIVOT scenarios, users of databases other than Oracle or SQL Server can write an equivalent query that uses GROUP BY and MAX(CASE ...) expressions, as documented in this answer here; a sketch of this approach follows below.

Reference: Are You Using SQL PIVOT Yet? You Should! from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....
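For reference, here is a hedged sketch of the GROUP BY / MAX(CASE ...) emulation mentioned above, expressed as application code issuing the query over the example table via JDBC. The table and column names come from the article; the printing helper and connection handling are assumptions added for illustration, and the exact quoting rules for identifiers like _objectclass may differ per database.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ManualPivot {

    // Emulates PIVOT: one row per dnId, one column per property name.
    static final String PIVOT_SQL =
        "SELECT dnId, " +
        "  MAX(CASE WHEN propertyName = 'objectsid'      THEN propertyValue END) AS objectsid, " +
        "  MAX(CASE WHEN propertyName = '_objectclass'   THEN propertyValue END) AS objectclass, " +
        "  MAX(CASE WHEN propertyName = 'cn'             THEN propertyValue END) AS cn, " +
        "  MAX(CASE WHEN propertyName = 'samaccountname' THEN propertyValue END) AS samaccountname, " +
        "  MAX(CASE WHEN propertyName = 'name'           THEN propertyValue END) AS name " +
        "FROM myTable " +
        "GROUP BY dnId " +
        "ORDER BY dnId";

    static void printPivoted(Connection connection) throws SQLException {
        try (Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery(PIVOT_SQL)) {
            while (rs.next()) {
                System.out.printf("%d %s %s %s %s %s%n",
                    rs.getLong("dnId"),
                    rs.getString("objectsid"),
                    rs.getString("objectclass"),
                    rs.getString("cn"),
                    rs.getString("samaccountname"),
                    rs.getString("name"));
            }
        }
    }
}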

Default Methods: Java 8’s Unsung Heroes

A few weeks ago I wrote a blog saying that developers learn new languages because they’re cool. I still stand by this assertion because the thing about Java 8 is it’s really cool. Whilst the undoubted star of the show is the addition of Lambdas and the promotion of functions to first class variables, my current favourite is default methods. This is because they’re such a neat way of adding new functionality to existing interfaces without breaking old code. The implementation is simple: take an interface, add a concrete method and attach the keyword default as a modifier. The result is that suddenly all existing implementations of your interface can use this code. In this first, simple example, I’ve added default method that returns the version number of an interface1. public interface Version {   /**    * Normal method - any old interface method:    *    * @return Return the implementing class's version    */   public String version();   /**    * Default method example.    *    * @return Return the version of this interface    */   default String interfaceVersion() {     return "1.0";   } } You can then call this method on any implementing class. public class VersionImpl implements Version {   @Override   public String version() {     return "My Version Impl";   } } You may ask: why is this cool? If you take the java.lang.Iterable interface and add the following default method you get the death of the for loop.   default void forEach(Consumer<? super T> action) {     Objects.requireNonNull(action);     for (T t : this) {       action.accept(t);     }   } The forEach method takes an instance of a class that implements the Consumer<T> interface as an argument. The Consumer<T> can be found in the new java.util.function package and is what Java 8 calls a functional interface, which is an interface containing only one method. In this case it’s the method accept(T t) that takes one argument and has a void return. The java.util.function package is probably one of the most important packages in Java 8. It contains a whole bunch of single method, or functional, interfaces that describe common function types. For example, Consumer<T> contains a function that takes one argument and has a void return, whilst Predicate<T> is an interface with a function that takes one argument and returns a boolean, which is generally used to write filtering lambdas. The implementation of this interface should contain whatever it is that you previously wrote between your for loops brackets. So what, you may think, what does that give me? If this wasn’t Java 8 then the answer is “not much”. To use the forEach(…) method pre Java 8 you’d need to write something like this:     List<String> list = Arrays.asList(new String[] { "A", "FirsT", "DefaulT", "LisT" });     System.out.println("Java 6 version - anonymous class");     Consumer<String> consumer = new Consumer<String>() {       @Override       public void accept(String t) {         System.out.println(t);       }     };     list.forEach(consumer); But, if you combine this with lambda expressions or method references you get the ability to write some really cool looking code. Using a method reference, the previous example becomes:     list.forEach(System.out::println); You can do the same thing with a lambda expression:     list.forEach((t) -> System.out.println(t)); All this seems to be in keeping with one of the big ideas behind Java 8: let the JDK do the work for you. 
To paraphrase statesman and serial philanderer John F Kennedy “ask not what you can do with your JDK ask what your JDK can do for you”2. Design Problems of Default Methods That’s the new cool way of writing the ubiquitous for loop, but are there are problems with adding default methods to interfaces and if so, what are they and how did the guys on the Java 8 project fix them? The first one to consider is inheritance. What happens when you have an interface which extends another interface and both have a default method with the same signature? For example, what happens if you have SuperInterface extended by MiddleInterface and MiddleInterface extended by SubInterface? public interface SuperInterface {   default void printName() {     System.out.println("SUPERINTERFACE");   } } public interface MiddleInterface extends SuperInterface {   @Override   default void printName() {     System.out.println("MIDDLEINTERFACE");   } } public interface SubInterface extends MiddleInterface {   @Override   default void printName() {     System.out.println("SUBINTERFACE");   } } public class Implementation implements SubInterface {   public void anyOldMethod() {     // Do something here   }   public static void main(String[] args) {     SubInterface sub = new Implementation();     sub.printName();     MiddleInterface middle = new Implementation();     middle.printName();     SuperInterface sup = new Implementation();     sup.printName();   } } No matter which way you cut it, printName() will always print “SUBINTERFACE”. The same question arises when you have a class and an interface containing the same method signature: which method is run? The answer is the ‘class wins’ rule. Interface default methods will always be ignored in favour of class methods. public interface AnyInterface {   default String someMethod() {     return "This is the interface";   } } public class AnyClass implements AnyInterface {   @Override   public String someMethod() {     return "This is the class - WINNING";   } } Running the code above will always print out: “This is the class – WINNING” Finally, what happens if a class implements two interfaces and both contain methods with the same signature? This is the age-old C++ diamond problem; how do you solve the ambiguity? Which method is run? public interface SuperInterface {   default void printName() {     System.out.println("SUPERINTERFACE");   } } public interface AnotherSuperInterface {   default void printName() {     System.out.println("ANOTHERSUPERINTERFACE");   } } In Java 8’s case the answer is neither. If you try to implement both interfaces, then you’ll get the following error: Duplicate default methods named printName with the parameters () and () are inherited from the types AnotherSuperInterface and SuperInterface. In the case where you absolutely MUST implement both interfaces, then the solution is to invoke the ‘class wins’ rule and override the ambiguous method in your implementation. public class Diamond implements SuperInterface, AnotherSuperInterface {   /** Added to resolve ambiguity */   @Override   public void printName() {     System.out.println("CLASS WINS");   }   public static void main(String[] args) {     Diamond instance = new Diamond();     instance.printName();   } } When to Use Default Methods From a purist point of view the addition of default methods means that Java interfaces are no longer interfaces. Interfaces were designed as a specification or contract for proposed/intended behaviour: a contract that the implementing class MUST fulfil. 
Adding default methods means that there is virtually no difference between interfaces and abstract base classes3. This means that they’re open to abuse: some inexperienced developers may think it cool to rip out base classes from their codebase and replace them with default-method-based interfaces – just because they can – whilst others may simply confuse abstract classes with interfaces implementing default methods. I’d currently suggest using default methods solely for their intended use case: evolving legacy interfaces without breaking existing code. Though I may change my mind.

1 It’s not very useful, but it demonstrates a point…
2 John F Kennedy inauguration speech, January 20th 1961.
3 Abstract base classes can have a constructor whilst interfaces can’t. Classes can have private instance variables (i.e. state); interfaces can’t.

Reference: Default Methods: Java 8’s Unsung Heroes from our JCG partner Roger Hughes at the Captain Debug’s Blog blog....
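One detail worth adding to the diamond example earlier in the article: inside the overriding method, a class can still delegate to a specific parent interface’s default implementation using the InterfaceName.super syntax. Below is a small variation of the Diamond class (the class name is hypothetical; the SuperInterface and AnotherSuperInterface definitions are the ones shown in the article).

public class DelegatingDiamond implements SuperInterface, AnotherSuperInterface {

  // Required override to resolve the ambiguity between the two inherited defaults.
  @Override
  public void printName() {
    // Explicitly pick one (or both) of the inherited default implementations.
    SuperInterface.super.printName();        // prints "SUPERINTERFACE"
    AnotherSuperInterface.super.printName(); // prints "ANOTHERSUPERINTERFACE"
  }

  public static void main(String[] args) {
    new DelegatingDiamond().printName();
  }
}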

Validation in java (javafx)

Validation is one thing that’s missing from the core JavaFX framework. To fill in this gap there is already a third-party validation library present in ControlsFX. However, there’s one issue I have with it: it wasn’t created with FXML in mind. That’s not to say it isn’t a good library, it just misses this detail, and for me this is a no-go. Because of that I decided to create my own validation framework: FXValidation.

How it works

To show you how FXValidation works, let’s start from the bottom up by showing you an example of what an FXML file might look like when using this library. This is a simple example of a login screen where the user needs to enter both a user name and a password:

<Label>
  <text>User Name:</text>
</Label>
<TextField fx:id="userName" id="userName"></TextField>
<Label>
  <text>Password:</text>
</Label>
<PasswordField fx:id="password" id="password"></PasswordField>

<Button text="Submit" onAction="#submitPressed"></Button>

<fx:define>
  <RequiredField fx:id="requiredField1">
    <srcControl>
      <fx:reference source="userName"></fx:reference>
    </srcControl>
  </RequiredField>
  <RequiredField fx:id="requiredField2">
    <srcControl>
      <fx:reference source="password"></fx:reference>
    </srcControl>
  </RequiredField>
</fx:define>

<ErrorLabel message="Please enter your username">
  <validator>
    <fx:reference source="requiredField1"></fx:reference>
  </validator>
</ErrorLabel>
<ErrorLabel message="Please enter your password">
  <validator>
    <fx:reference source="requiredField2"></fx:reference>
  </validator>
</ErrorLabel>

At the beginning of the FXML snippet I define a text field and a password field for entering the login details. Other than that, there’s also a submit button so the user can send the login information to the system. After that comes the interesting part. First we define a couple of validators of type RequiredField. These validators check whether the input in question is empty and, if so, record in a flag that the validation has errors. There are also other types of validators built into the FXValidation framework, but we’ll get to that in a bit.

Finally we define a couple of ErrorLabels. These are nodes that implement IValidationDisplay; any class that implements this interface has the purpose of displaying information to the user whenever there is an error in the validation process. Currently there is only one of these classes in the framework: the ErrorLabel.

Finally we need to trigger validation when the user clicks the submit button. This is done in the controller, in the submit method:

public void submitPressed(ActionEvent actionEvent) {
  requiredField1.eval();
  requiredField2.eval();
}

This will trigger validation for the validators we have defined. If there are errors, the ErrorLabels will display the error message that was defined in them. There’s also one extra thing the validators do: they add the CSS style class "error" to every control that is in error after the validation process has taken effect. This allows the programmer to style the controls differently using CSS whenever these controls have the error class appended to them. The programmer can check for errors in the validation process by checking the hasErrors property of the validators. And here’s our example in action.

The details

From what I’ve shown you above we can see that there are basically two types of classes involved:

- The validator: takes care of checking whether the target control (srcControl) conforms to the validation rule. If not, it appends the "error" style class to the target control and sets its hasErrors property to true. All validators extend from ValidatorBase.
- The error display information: this takes care of informing the user what went wrong with the validation; it might be that the field is required, the field’s content doesn’t have the necessary number of characters, etc. All these classes implement IValidationDisplay.

In the library there are currently three validators and only one error "displayer", which is ErrorLabel. The validators are the following:

- RequiredField: checks whether the target control (srcControl) has content; if it doesn’t, it gives an error.
- CardinalityValidator: checks whether the target control (srcControl) has at least min and at most max characters.
- RegexValidator: checks the content of the target control (srcControl) against a given regular expression.

And that’s it.

Reference: Validation in java (javafx) from our JCG partner Pedro Duque Vieira at the Pixel Duke blog....
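To make the described mechanism concrete, here is a minimal, framework-free sketch (this is not FXValidation’s actual code): check the source control, flag the error, toggle the "error" style class so CSS can highlight it, and hand a message to the display node. It uses only standard JavaFX API; the method and class names are made up.

import javafx.scene.control.Label;
import javafx.scene.control.TextField;

public class RequiredCheck {

    // Returns true if the field is empty; mirrors what a "required field" validator boils down to.
    static boolean evalRequired(TextField field, Label errorLabel, String message) {
        boolean hasErrors = field.getText() == null || field.getText().trim().isEmpty();
        if (hasErrors) {
            if (!field.getStyleClass().contains("error")) {
                field.getStyleClass().add("error");   // lets a stylesheet highlight the control
            }
            errorLabel.setText(message);              // the "error display" part
        } else {
            field.getStyleClass().remove("error");
            errorLabel.setText("");
        }
        return hasErrors;
    }
}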

INTEL Perceptual Computing – RealSense Challenge 2014

           Perceptual Computing technology is redefining the boundaries between human and computer interaction. Intel invites you to claim your share of history by designing new, leading edge perceptual computing Apps. RealSense Challenge 2014 is a new contest in which developers are challenged to design perceptual computing apps. At the heart of this competition, the new Intel RealSense 3D camera and SDK allow to interact with computer by supporting hand/finger tracking, facial analysis, speech recognition, background subtraction and augmented reality.  RealSense Challenge 2014 The competition has two phases: Ideation and Development. The ideation phase will be opened until the end of September, all you are asked to do is to submit your ideas (as an individual or as a team) and try to be within the 1300 participants who will be invited to turn their ideas into working demos. Everyone participating to the development phase will be loaned the Intel 3D camera and RealSense SDK for C/C++ development. There are also two tracks for this challenge. The Pioneer track is open to all developers from around the world whereas the Ambassador track is only open to developers who submitted a demo to one of Intel Perceptual Computing Challenge 2013 or to its Ultimate Coder Challenge. Up to 1000 Pioneers and 300 Ambassadors will be chosen to move forward on the Development phase. Both contest tracks will accept entries from participants in the following Innovation categories :Gaming + Play Learning Entertainment Interact naturally Collaboration/Creation Open innovationThere are $1 Million cash prizes to be shared by the Pioneer and Ambassador groups. Each track will compete independantly though.  PioneerAmbassadorGRAND PRIZE (1) $25,000 One overall winner chosen from the first place winners of each category will win an additional $25,000 cash prize.GRAND PRIZE (1) $50,000 One overall winner chosen from the first place winners of each category will win an additional $50,000 cash prize.FIRST PLACE (5) $25,000 The top scoring demo in each category will win a $25,000 cash prizeFIRST PLACE (5) $50,000 The top scoring demo in each category will win a $50,000 cash prizeSECOND PLACE (10) $10,000 Two demos from each of the 5 categories will receive a $10,000 cash prizeSECOND PLACE (10) $20,000 Two demos from each of the 5 categories will receive a $20,000 cash prizeEARLY SUBMISSION (50) $1,000 The top scoring demos, submitted prior to the Early submission deadline, across all 5 categories will each receive a cash prize of $1,000EARLY SUBMISSION (30) $1,000 The top scoring demos, submitted prior to the Early submission deadline, across all 5 categories will each receive a cash prize of $1,000HASWELL NUC (250) The top 250 scoring demos from Phase 1, across all 5 categories, will receive a Haswell NUC device valued at nearly $600.HASWELL NUC (50) The top 50 scoring demos from Phase 1, across all 5 categories, will receive a Haswell NUC device valued at nearly $600.  If you are a Pioneer, you can sign up today and have until October 1, 2014 to submit your idea. If you are an Ambassador, you can simply go to the challenge page and sign-in with the email address used for the 2013 competition. 
More info and sign-up on the RealSense Challenge 2014 page.

On the side, Intel is organizing two webinars for developers to get inspired and to learn more about natural user interfaces and the RealSense technology:

Webinar 1: Learn about gesture recognition technology, on the technical side – August 13th 2014, 1pm Eastern
Webinar 2: Wide variety of usages for natural user interfaces – August 20th 2014, 1pm Eastern...

What DSLs are not for

Domain specific languages are special programming languages. Each fits some special “domain” and makes the business code simpler. Using a DSL, the business-level problem can be implemented at a higher level, and therefore the resulting code is simpler, is created faster, and presumably contains fewer errors. Some DSLs in some areas even make it possible for domain experts with limited programming experience to develop business functionality. There are many great books on DSLs, Martin Fowler’s being at least one of, if not the, best on the topic.

Many times the decision to use a DSL is made to shorten release cycles. A mature piece of software in a rapidly changing business domain may change frequently, but many times the change is small. If it requires a change to the code, then the whole release cycle has to be repeated: code is modified, unit tested, a release candidate is created, QA tests the new version, and finally the release is ready, weeks after the new business need arose. The obvious approach is to embed some DSL into the application and develop the business functions that are likely to change in this DSL. The “script” written in the DSL may not be part of the real release, and therefore the change can go through the system faster. Developers have less of the obvious, tedious coding (which developers usually do not like), and the business is happy to get the modified functions faster. Right?

WRONG! But perhaps not so obviously at first. The DSL functions fine, the new behavior is delivered faster, and there is no problem. Some time later, however, there comes a new feature that cannot be implemented in the DSL and needs a change to the application code. Why not extend the DSL and implement the new functionality in the new version of the DSL? This approach is very tempting, but it is very dangerous.

DSLs are like alcohol. They can have a purpose and can serve good. A cup of quality wine after a nice summer evening supper should not harm. Too much of it, regularly, will ruin your life. A DSL that has too many features may be dangerous. Some may use it for good, but there is the possibility of abuse. The release process was examined and engineered when the DSL was introduced, but it may not have been reviewed as the DSL became more and more powerful, and suddenly you may face a situation where new features are developed into the software outside of the release cycle. At some point the release process – and its most crucial part, quality assurance – may be ruined. DSLs should be simple. Modification of the application scripting should also follow some release management.

There may be no release management at all. I have heard of software projects where the software was released to the public without any significant testing. If there was an error, the users complained about it and a new release came out an hour later, fixing one bug and creating a new one. No problem, if the business can stand that: the actual software was a Facebook-like application where new features were more important to the users than uninterrupted use. Other applications, in telecom or banking, should be tested a bit more rigorously. Regulation may even demand that all releases be archived. In that case, scripting outside the release cycle is out of the question. And there may be something in the middle: some parts or features of the application may need strict release management, while others do not. Some parts can be scripted using a DSL; other, core functions need strong QA and release management. Some features may mix both: scripted and still part of the cycle.

The important message is: application scripting in a DSL does not by itself ease release management and/or QA. If the release cycle can be relaxed for some part of the application’s features, a DSL may be a tool to aid that, but the DSL is never the reason.

Reference: What DSLs are not for from our JCG partner Peter Verhas at the Java Deep blog....

Using IntelliJ bookmarks

This is a quick post about IntelliJ’s nice bookmark feature. IntelliJ gives you the option to bookmark single lines of code. After a line has been bookmarked, you can use various ways to jump directly back to this line. So it can be a good idea to bookmark code locations you often work with.

To create a new bookmark you only have to press F11 inside the code editor. Bookmarked lines show a small checkmark next to the line number. Bookmarks can be removed by selecting the bookmarked line and pressing F11 again.

To see all bookmarks you can press Shift – F11. This opens a small popup window which shows a list of all bookmarks you have created. Note that this window can be completely controlled by using the keyboard:

- With Up / Down you can browse the list of bookmarks
- With Enter you jump to the selected bookmark
- Esc closes the window
- A bookmark can be moved up or down using Alt – Up / Alt – Down

Note that you can also add a mnemonic identifier to a bookmark. You do this by selecting a line and pressing Ctrl – F11. This opens a small menu in which you can choose a mnemonic identifier (which is a character or a number). You can choose an identifier by clicking on one of the menu buttons or by simply pressing the corresponding key on your keyboard. Bookmark mnemonics are also shown next to the line number. In the following image, 1 was chosen as the mnemonic.

Mnemonics give you the option to move even more quickly between bookmarks. You can directly jump to a mnemonic bookmark by opening the bookmark popup (Shift – F11) and pressing the mnemonic key (1 in this example). For numeric bookmarks even more shortcuts are available. You can toggle a numeric mnemonic on a selected line by pressing Ctrl – Shift – <number>. If you want to jump to a numeric mnemonic you use the Ctrl – <number> shortcut. For example: Ctrl – 5 brings you directly to the mnemonic bookmark 5.

Note that bookmarks are also shown in the Favorites view. So if you like clicking with your mouse, this is for you!

Reference: Using IntelliJ bookmarks from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog....

A beginner’s guide to JPA/Hibernate flush strategies

Introduction

In my previous post I introduced the entity state transitions of the object-relational mapping paradigm. All managed entity state transitions are translated to associated database statements when the current Persistence Context gets flushed. Hibernate’s flush behavior is not always as obvious as one might think.

Write-behind

Hibernate tries to defer the Persistence Context flushing up until the last possible moment. This strategy has traditionally been known as transactional write-behind. The write-behind is related to Hibernate flushing rather than to any logical or physical transaction. During a transaction, the flush may occur multiple times. The flushed changes are visible only to the current database transaction. Until the current transaction is committed, no change is visible to other concurrent transactions. The persistence context, also known as the first-level cache, acts as a buffer between the current entity state transitions and the database. In caching theory, write-behind synchronization requires that all changes happen against the cache, whose responsibility is to eventually synchronize with the backing store.

Reducing lock contention

Every DML statement runs inside a database transaction. Based on the current database transaction isolation level, locks (shared or explicit) may be acquired for the currently selected/modified table rows. Reducing the lock holding time lowers the deadlock probability and, according to scalability theory, increases throughput. Locks always introduce serial execution, and according to Amdahl’s law, the maximum speedup is inversely proportional to the serial part of the currently executing program. Even in the READ_COMMITTED isolation level, UPDATE and DELETE statements acquire locks. This behavior prevents other concurrent transactions from reading uncommitted changes or modifying the rows in question. So deferring locking statements (UPDATE/DELETE) may increase performance, but we must make sure that data consistency is not affected whatsoever.

Batching

Postponing the entity state transition synchronization has another major advantage. Since all changes are flushed at once, Hibernate may benefit from the JDBC batching optimization. Batching improves performance by grouping multiple DML statements into a single operation, therefore reducing database round-trips.

Read-your-own-writes consistency

Since queries always run against the database (unless the second-level query cache is hit), we need to make sure that all pending changes are synchronized before the query starts running. Therefore, both JPA and Hibernate define a flush-before-query synchronization strategy.

From JPA to Hibernate flushing strategies

| JPA FlushModeType | Hibernate FlushMode | Hibernate implementation details                                 |
|-------------------|---------------------|------------------------------------------------------------------|
| AUTO              | AUTO                | The Session is sometimes flushed before query execution.          |
| COMMIT            | COMMIT              | The Session is only flushed prior to a transaction commit.        |
|                   | ALWAYS              | The Session is always flushed before query execution.             |
|                   | MANUAL              | The Session can only be manually flushed.                         |
|                   | NEVER               | Deprecated. Use MANUAL instead. This was the original name given to manual flushing, but it was misleading users into thinking that the Session won’t ever be flushed. |

Current Flush scope

The Persistence Context defines a default flush mode that can be overridden upon Hibernate Session creation. Queries can also take a flush strategy, therefore overruling the current Persistence Context flush mode.

| Scope               | Hibernate       | JPA               |
|---------------------|-----------------|-------------------|
| Persistence Context | Session         | EntityManager     |
| Query               | Query, Criteria | Query, TypedQuery |

Stay tuned

In my next post, you’ll find out that Hibernate FlushMode.AUTO breaks data consistency for SQL queries and you’ll see how you can overcome this shortcoming.

Reference: A beginner’s guide to JPA/Hibernate flush strategies from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....
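As a quick illustration of the two scopes in the table above, here is a minimal sketch (not from the original post): the flush mode can be overridden for the whole Session/EntityManager or for a single query. The Product entity and the surrounding methods are hypothetical; the Session and EntityManager are assumed to be obtained elsewhere.

import javax.persistence.EntityManager;
import javax.persistence.FlushModeType;
import org.hibernate.FlushMode;
import org.hibernate.Session;

public class FlushScopes {

    // Persistence Context scope: override the default flush mode for the whole unit of work.
    static void configure(Session session, EntityManager entityManager) {
        session.setFlushMode(FlushMode.COMMIT);            // Hibernate API
        entityManager.setFlushMode(FlushModeType.COMMIT);  // JPA API
    }

    // Query scope: overrule the Persistence Context flush mode for a single query.
    static java.util.List<?> loadProducts(Session session) {
        return session.createQuery("from Product")
                      .setFlushMode(FlushMode.MANUAL)      // this query won't trigger a flush
                      .list();
    }
}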

Feature Toggles are one of the worst kinds of Technical Debt

Feature flags or config flags aka feature toggles aka flippers are an important part of Devops practices like dark launching (releasing features immediately and incrementally), A/B testing, and branching in code or branching by abstraction (so that development teams can all work together directly on the code mainline instead of creating separate feature branches). Feature toggles can be simple Boolean switches or complex decision trees with multiple different paths. Martin Fowler differentiates between release toggles (which are used by development and ops to temporarily hide incomplete or risky features from all or part of the user base) and business toggles to control what features are available to different users (which may have a longer – even permanent – life). He suggests that these different kinds of flags should be managed separately, in different configuration files for example. But the basic idea is the same, to build conditional branches into mainline code in order to make logic available only to some users or to skip or hide logic at run-time, including code that isn’t complete (the case for branching by abstraction). Using run-time flags like this isn’t a new idea, certainly not invented at Flickr or Facebook. Using flags and conditional statements to offer different experiences to different users or to turn on code incrementally is something that many people have been practicing for a long time. And doing this in mainline code to avoid branching is in many ways a step back to the way that people built software 20+ years ago when we didn’t have reliable and easy to use code management systems. Advantages and Problems of Feature Flags Still, there are advantages to developers working this way, making merge problems go away, and eliminating the costs of maintaining and supporting long-lived branches. And carefully using feature flags can help you to reduce deployment risk through canary releases or other incremental release strategies, where you make the new code active for only some users or customers, or only on some systems, and closely check before releasing progressively to the rest of the user base – and turn off the new code if you run into problems. All of this makes it easier to get new code out faster for testing and feedback. But using feature flags creates new problems of its own. The plumbing and scaffolding logic to support branching in code becomes a nasty form of technical debt, from the moment each feature switch is introduced. Feature flags make the code more fragile and brittle, harder to test, harder to understand and maintain, harder to support, and less secure. Feature Flags need to be Short Lived Abhishek Tiwari does a good job of explaining feature toggles and how they should be used. He makes it clear that they should only be a temporary deployment/release management tool, and describes a disciplined lifecycle that all feature toggles need to follow, from when they are created by development, then turned on by operations, updated if any problems or feedback come up, and finally retired and removed when no longer needed.Feature toggles require a robust engineering process, solid technical design and a mature toggle life-cycle management. Without these 3 key considerations, use of feature toggles can be counter-productive. 
Remember the main purpose of toggles is to perform release with minimum risk, once release is complete toggles need to be removed.Feature Flags are Technical Debt – as soon as you add them Like other sources of technical debt, feature flags are cheap and easy to add in the short term. But the longer that they are left in the code, the more that they will end up costing you. Release toggles are supposed to make it easier and safer to push code out. You can push code out only to a limited number of users to start, reducing the impact of problems, or dark launch features incrementally, carefully assessing added performance costs as you turn on some of the logic behind the scenes, or run functions in parallel. And you can roll-back quickly by turning off features or optional behaviour if something goes wrong or if the system comes under too much load. But as you add options, it can get harder to support and debug the system, keeping track of which flags are in which state in production and test can make it harder to understand and duplicate problems. And there are dangers in releasing code that is not completely implemented, especially if you are following branching by abstraction and checking in work-in-progress code protected by a feature flag. If the scaffolding code isn’t implemented correctly you could accidentally expose some of this code at run-time with unpredictable results.…visible or not, you are still deploying code into production that you know for a fact to be buggy, untested, incomplete and quite possibly incompatible with your live data. Your if statements and configuration settings are themselves code which is subject to bugs – and furthermore can only be tested in production. They are also a lot of effort to maintain, making it all too easy to fat-finger something. Accidental exposure is a massive risk that could all too easily result in security vulnerabilities, data corruption or loss of trade secrets. Your features may not be as isolated from each other as you thought you were, and you may end up deploying bugs to your production environment” James McKay The support dangers of using – or misusing – feature flags was illustrated by a recent high-profile business failure at a major financial institution. The team used feature flags to contain operational risk when they introduced a new application feature. Unfortunately, they re-purposed a flag which was used by old code (code left in the system even though it hadn’t been used in years). Due to some operational mistakes in deployment, not all of the servers were successfully updated with the new code, and when the flag was turned on, old code and new code started to run on different computers at the same time doing completely different things with wildly inconsistent and, ultimately business-ending results. By the time that the team figured out what was going wrong, the company had lost millions of $. As more flags get added, testing of the application becomes harder and more expensive, and can lead to an explosion of combinations: If a is on and b is off and c is on and d is off then… what is supposed to happen? Fowler says that you only need to test the combinations which should reasonably be expected to happen in production, but this demands that everyone involved clearly understand what options could and should be used together – as more flags get added, this gets harder to understand and verify. 
And other testing needs to be done to make sure that switches can be turned on and off safely at run-time, and that features are completely and safely encapsulated by the flag settings and that behaviour doesn’t leak out by accident (especially if you are branching in code and releasing work-in-progress code). You also need to test to make sure that the structural changes to introduce the feature toggle do not introduce any regressions, all adding to testing costs and risks. More feature flags also make it harder to understand how and where to make fixes or changes, especially when you are dealing with long-lived flags and nested options. And using feature switches can make the system less secure, especially if you are hiding access to features in the UI. Adding a feature can make the attack surface of the application bigger, and hiding features at the UI level (for dark launching) won’t hide these features from bad guys. Use Feature Flags with Caution Feature flags are a convenient and flexible way to manage code, and can help you to get changes and fixes out to production more quickly. But if you are going to use flags, do so responsibly:Minimize your use of feature flags for release management, and make the implementation as simple as possible. Martin Fowler explains that it is important to minimize conditional logic to the UI and to entry points in the system. He also emphasises that:Release toggles are a useful technique and lots of teams use them. However they should be your last choice when you’re dealing with putting features into production. Your first choice should be to break the feature down so you can safely introduce parts of the feature into the product. The advantages of doing this are the same ones as any strategy based on small, frequent releases. You reduce the risk of things going wrong and you get valuable feedback on how users actually use the feature that will improve the enhancements you make later.Review flags often, make sure that you know which flags are on and which are supposed to be on and when features are going to be removed. Create dashboards (so that everyone can easily see the configuration) and health checks – run-time assertions – to make sure that important flags are on or off as appropriate. Once a feature is part of mainline, be ruthless about getting it out of the code base as soon as it isn’t used or needed any more. This means carefully cleaning up the feature flags and all of the code involved, and testing again to make sure that you didn’t break anything when you did this. Don’t leave code in the mainline just in case you might need it again some day. You can always go back and retrieve it from version control if you need to. Recognize and account for the costs of using feature flags, especially long-lived business logic branching in code.Feature toggles start off simple and easy. They provide you with new options to get changes out faster, and can help reduce the risk of deployment in the short term. But the costs and risks of relying on them too much can add up, especially over the longer term.Reference: Feature Toggles are one of the worst kinds of Technical Debt from our JCG partner Jim Bird at the Building Real Software blog....
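To make the mechanics concrete, here is a minimal sketch (not from the article) of the kind of release toggle being discussed: a flag read from configuration and checked at a single entry point, which is exactly the scaffolding that has to be cleaned up once the rollout is complete. All class, method, and flag names are hypothetical.

import java.util.Properties;

public class CheckoutService {

    private final Properties featureConfig;

    public CheckoutService(Properties featureConfig) {
        this.featureConfig = featureConfig;
    }

    // Entry-point check: keep the branching here, not scattered through the code base.
    public void checkout(Order order) {
        if (isEnabled("checkout.newPricingEngine")) {
            checkoutWithNewPricingEngine(order);   // the dark-launched path
        } else {
            checkoutWithLegacyPricing(order);      // the proven path, kept until the toggle is retired
        }
    }

    private boolean isEnabled(String flagName) {
        // off by default: an unset or misspelled flag falls back to the old behaviour
        return Boolean.parseBoolean(featureConfig.getProperty(flagName, "false"));
    }

    private void checkoutWithNewPricingEngine(Order order) { /* ... */ }

    private void checkoutWithLegacyPricing(Order order) { /* ... */ }

    // Hypothetical domain type, only here to make the sketch compile.
    public static class Order { }
}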