Difference between getPath(), getCanonicalPath() and getAbsolutePath() of File in Java

The File API is a very important one in Java, since it gives Java programs access to the file system. Though Java's file API is rich, there are a lot of subtleties to know when you use it. One of the common questions programmers have about file paths is the difference between the getPath(), getCanonicalPath() and getAbsolutePath() methods: why are there three methods to get a file path, and what happens if you call getPath() in place of getCanonicalPath()?

Before looking at the differences between these methods, let's understand the concepts behind them, i.e. the difference between a path, an absolute path, and a canonical path. In general, a path is a way to get to a particular file or directory in a file system; it can be absolute (also known as a full path) or relative, e.g. relative to the current location. An absolute path defines a path from the root of the file system, e.g. C:\ or D:\ in Windows and / in UNIX-based operating systems such as Linux or Solaris. A canonical path is a little bit trickier, because every canonical path is absolute, but the converse is not true. It defines a unique absolute path to the file from the root of the file system. For example, C:\temp\names.txt is the canonical path to names.txt in Windows, and /home/javinpaul/test/names.txt is a canonical path in Linux. On the other hand, there can be many absolute paths to the same file, including the canonical path we have just seen. For example, another absolute path to the same file in Windows is C:\temp\.\names.txt; similarly, in UNIX, /home/javinpaul/test/./names.txt is another absolute path to the same file. So you can say that an absolute path may contain metacharacters like . and .. which represent the current and parent directory. In the rest of this article, we will learn the difference between getPath(), getAbsolutePath() and getCanonicalPath() by looking at the values they return for a particular file.

What is an Absolute, Relative and Canonical Path?

You often hear the terms absolute, canonical and relative path while dealing with files in UNIX, Windows, Linux or any other file system. These are three common ways to reference a particular file in a script or program. If you are a programmer writing a script, then you know how using an absolute path can make your script rigid and inflexible; in fact, using absolute paths, infamously known as hard-coding paths in a script, is one of the bad coding practices in a programmer's dictionary. An absolute path is the complete path to a particular file, such as C:\temp\abc.txt. The definition of an absolute pathname is system dependent. On UNIX systems, a pathname is absolute if its prefix is "/". On Win32 systems, a pathname is absolute if its prefix is a drive specifier followed by "\", or if its prefix is "\\" (a UNC path).

For example, suppose we have two directories, C:\temp and C:\temp1, and the file test.txt is in the temp directory. In Java under Windows, the following absolute paths all refer to the same file test.txt:

C:\temp\test.txt
C:\temp\TEST.TXT
C:\temp\.\test.txt
C:\temp1\..\temp\test.txt

On the other hand, a relative path is relative to the directory you are in, known as the current directory. So if you are in the above directory, then referencing the file test.txt as a relative path assumes the directory you are currently in. When you use ../ you go back one directory, also known as the parent directory. Canonical paths are a bit harder. For starters, all canonical paths are absolute (but not all absolute paths are canonical), as the sketch below demonstrates.
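Here is a minimal sketch of that idea (my own example, assuming a Windows machine with a C:\temp directory; the file names are hypothetical): two different absolute paths to the same file compare equal once canonicalized.

import java.io.File;
import java.io.IOException;

public class CanonicalCompare {

    public static void main(String[] args) throws IOException {
        // Two different absolute paths that point at the same file
        File direct   = new File("C:\\temp\\test.txt");
        File indirect = new File("C:\\temp1\\..\\temp\\.\\test.txt");

        // Their absolute paths differ as strings...
        System.out.println(direct.getAbsolutePath()
            .equals(indirect.getAbsolutePath()));   // false

        // ...but their canonical paths are identical, showing
        // that both references lead to the same file
        System.out.println(direct.getCanonicalPath()
            .equals(indirect.getCanonicalPath()));  // true
    }
}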
A single file existing on a system can have many different paths that refer to it, but only one canonical path. The canonical path gives a unique absolute path for a given file; the details of how this is achieved are system-dependent. For the above example, we have one and only one canonical path: C:\temp\test.txt. Remember that in Java you can use the UNIX-style forward slash (/) as a path separator, or you can obtain the operating system's separator from the file.separator system property, a key to writing truly platform-independent Java applications.

Difference between getPath(), getAbsolutePath() and getCanonicalPath() in Java

Once you understand the difference between absolute, canonical and relative paths, it is easy to differentiate between these three methods, because they return exactly those paths. In short, here is the key difference between them:

- getPath() returns a String denoting the path that was used to create the associated File object, which may be relative to the current directory.
- getAbsolutePath() returns the path string after resolving it against the current directory if it is relative, resulting in a fully qualified path.
- getCanonicalPath() returns the path string after resolving any relative path against the current directory, removing any relative path elements (. and ..) and any file system links, to return the path which the file system considers the canonical means to reference the file system object to which it points.

Also remember that each of the latter two methods has a File equivalent which returns the corresponding File object, e.g. getAbsoluteFile() and getCanonicalFile().

getPath() vs getAbsolutePath() vs getCanonicalPath()

The following example shows how there can be many different paths (and absolute paths) to the same file, which all have the exact same canonical path. Thus the canonical path is useful if you want to know whether two different paths point to the same file or not.

import java.io.File;

/**
 * Java program to show the difference between path, absolute path and canonical
 * path related to files in Java. The java.io.File class provides three methods,
 * getPath(), getAbsolutePath() and getCanonicalPath(), and this program
 * illustrates what those methods return.
 *
 * @author Javin Paul
 */
public class PathDemo {

    public static void main(String args[]) {
        System.out.println("Path of the given file :");
        File child = new File(".././Java.txt");
        displayPath(child);

        File parent = child.getParentFile();
        System.out.println("Path of the parent file :");
        displayPath(parent);
    }

    public static void displayPath(File testFile) {
        System.out.println("path : " + testFile.getPath());
        System.out.println("absolute path : " + testFile.getAbsolutePath());

        try {
            System.out.println("canonical path : " + testFile.getCanonicalPath());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Output:
Path of the given file :
path : ..\.\Java.txt
absolute path : C:\Users\WINDOWS 8\workspace\Demo\..\.\Java.txt
canonical path : C:\Users\WINDOWS 8\workspace\Java.txt
Path of the parent file :
path : ..\.
absolute path : C:\Users\WINDOWS 8\workspace\Demo\..\.
canonical path : C:\Users\WINDOWS 8\workspace

That's all about the difference between getPath(), getAbsolutePath() and getCanonicalPath() in Java. Along the way, we have also learned the difference between a path, an absolute path and a canonical path.
What you need to remember is that getPath() gives you the path with which the File object was created, which may or may not be relative; getAbsolutePath() gives an absolute path to the file; and getCanonicalPath() gives you the unique absolute path to the file. It's worth noting that there can be a huge number of absolute paths that point to the same file, but only one canonical path.

Reference: Difference between getPath(), getCanonicalPath() and getAbsolutePath() of File in Java from our JCG partner Javin Paul at the Javarevisited blog.

ADF: Popup, Dialog and Input Components

In this post I would like to focus on a very common use case: an af:popup containing an af:dialog with input components inside. There are a couple of pitfalls that we need to watch out for when implementing this use case. Let's consider a simple example:

<af:popup id="p1" contentDelivery="lazyUncached">
  <af:dialog id="d2" title="Dialog">
    <af:inputText value="#{TheBean.firstName}" label="First Name" id="it1"/>
    <af:inputText value="#{TheBean.lastName}" label="Last Name" id="it2"/>
  </af:dialog>
</af:popup>

The most interesting thing here is the popup's contentDelivery property, which is set to lazyUncached. This prevents the popup from caching the submitted input values and forces it to get the values from the model on each request instead of using values from the cache.

Let's make the example a bit more complicated. In the lastName setter we are going to throw an exception:

public void setLastName(String lastName) throws Exception {
    this.lastName = lastName;
    throw new Exception("This last name is bad");
}

So, obviously, if we try to submit the dialog, the input values cannot be submitted to the model, and they are going to be stored in the local values of the input components. These local values are not cleaned up even if we press the Cancel button, and they will be used during the subsequent request. In order to prevent this behavior we have to set the popup's resetEditableValues property to whenCanceled, like this:

<af:popup id="p1" contentDelivery="lazyUncached"
          resetEditableValues="whenCanceled">
  <af:dialog id="d2" title="Dialog">
    <af:inputText value="#{TheBean.firstName}" label="First Name" id="it1"/>
    <af:inputText value="#{TheBean.lastName}" label="Last Name" id="it2"/>
  </af:dialog>
</af:popup>

Now let's consider an example of af:dialog with custom buttons:

<af:popup id="p1" contentDelivery="lazyUncached"
          resetEditableValues="whenCanceled"
          binding="#{TheBean.popup}">
  <af:dialog id="d2" title="Dialog" type="none">
    <af:inputText value="#{TheBean.firstName}" label="First Name" id="it1"/>
    <af:inputText value="#{TheBean.lastName}" label="Last Name" id="it2"/>
    <f:facet name="buttonBar">
      <af:panelGroupLayout layout="horizontal" id="pgl1">
        <af:button text="Ok" id="b2"
                   actionListener="#{TheBean.buttonActionListener}"/>
        <af:button text="Cancel" id="b3" immediate="true"
                   actionListener="#{TheBean.buttonActionListener}"/>
      </af:panelGroupLayout>
    </f:facet>
  </af:dialog>
</af:popup>

So, there are two custom buttons, "Ok" and "Cancel", with the following action listener:

public void buttonActionListener(ActionEvent actionEvent) {
    getPopup().hide();
}

resetEditableValues doesn't work in this case, and the local values of the input components won't be cleaned up when pressing the Cancel button. There are a couple of options to fix this issue.
The first one is to add an af:resetListener to the Cancel button:

<af:button text="Cancel" id="b3" immediate="true"
           actionListener="#{TheBean.buttonActionListener}">
  <af:resetListener type="action"/>
</af:button>

The second option is to cancel the popup instead of just hiding it in the Cancel button's action listener:

<af:button text="Ok" id="b2"
           actionListener="#{TheBean.buttonActionListener}"/>
<af:button text="Cancel" id="b3" immediate="true"
           actionListener="#{TheBean.cancelButtonActionListener}"/>

public void cancelButtonActionListener(ActionEvent actionEvent) {
    getPopup().cancel();
}

That's it!

Reference: ADF: Popup, Dialog and Input Components from our JCG partner Eugene Fedorenko at the ADF Practice blog.

Strategies to migrate from a DAO library

In this post I will discuss several strategies to handle the following situation: you're working on a legacy project that uses a company library with DBOs and DAOs for accessing the database, but the generator for this library is broken and you have to make changes to the DBOs and/or DAOs.

Basic cases

You will face the following three basic cases when the database changes:

New entities. If you only have to add new entities with unidirectional relationships to entities from the common library, everything is fine and in most cases you don't have to touch the library. The same is true for adding new queries to the DAOs, thanks to inheritance.

Extending existing entities / changing methods of the DAOs. Extending existing entities with new attributes may be a little more complicated, but it is also possible without using the generator. Changing the existing methods of a DAO, on the other hand, is a fairly simple case: you can use inheritance, or introduce a new DAO that only provides the new method and use this DAO instead of the old one.

Destructive changes. The interesting case is when you have destructive changes in the database, like removing columns, foreign keys or whole tables.

Because the first and second cases are, more or less, simple, I will focus only on strategies for the latter case, destructive changes.

Strategies

1. Fix the generator

This solution will not touch the current infrastructure of the project. Whether this task is worth doing depends on:

- the generator you use (in my case I had a very old version of the Hibernate Reverse Engineering project)
- the database changes that will be introduced in the future (as far as I know, Postgres enums are still a problem for Hibernate)
- the time you have to fix the generator

Pros:
- You don't touch the existing system.
- Provided there are no known breaking changes from the database point of view, the generator is working again.
- Future changes can be applied fast and easily.

Cons:
- You stick to the old system.
- You may add a constraint for the database developers (see Postgres enums).
- It may take a long time to fix the generator.

2. Replace all DBOs

This option will replace all existing DBOs from within the library with generated, but fully accessible, ones in your project. In the case of Java you need to modify the jar archive for this approach to work without any potential classpath issues. When a database table or a field that is used within a query is gone, you must also replace the existing DAO within the library with a new one in your project.

Pros:
- You get full control over the DBOs back.
- While the database evolves you will also regain control over the DAOs; you only have to touch the DAOs when it's really necessary.

Cons:
- You may have to modify the library itself.
- You change the existing system.
- Named queries might be forgotten while recreating the DBOs.
- In case of removed tables, you must also replace the DAOs. This might break the application, due to unknown side effects of the methods. To avoid this, it is necessary to have a good test harness, so that you can discover whether things were broken.

3. Replace the whole library

The following two approaches have in common that they will replace the whole library. The replacement of the DBOs needs to be done as described in 2., but the approach for the DAOs differs between the two methods.

3.1. Introduce a new framework

I will describe this approach for Java and Spring Data JPA, but I guess it is feasible in other languages as well. Spring Data comes with a very nice feature that allows you to specify the query you would like to execute simply as the method name of an interface. The method name must follow a specific format and language, for example, a finder on an Address entity: findByCity(String city). A repository built this way is sketched below.
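To make this concrete, here is a minimal sketch of my own (assuming a hypothetical Address entity mapped with JPA and Spring Data JPA on the classpath); Spring derives the query from the method name, so only the interface is declared:

import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical repository replacing a hand-written AddressDao.
// Spring Data JPA derives "select a from Address a where a.city = ?1"
// from the method name alone - no implementation is written by hand.
public interface AddressRepository extends JpaRepository<Address, Long> {

    List<Address> findByCity(String city);
}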
Under the assumption that the existing DAOs follow such a convention, you can introduce Spring Data JPA. You must now 'only' reverse engineer the method signatures, and potentially the annotations, of the existing DAOs and transfer them to interfaces. Finally, you 'just' have to change method names where they do not match.

Pros:
- The old solution is removed completely and replaced with a well-proven and evolvable one.
- The work is done once, and there is no repetition of small steps.
- Less code, due to auto-generation of entities and, in most cases, only creating interface definitions.

Cons:
- Introducing a new framework/technology to a stack of old, sometimes unmaintained, frameworks.
- Risk of breaking working queries.

3.2. Write everything by hand

This approach goes the usual way of implementing DAOs: manually, and mostly without any framework magic. It is feasible when you have no formal language that can be automatically translated to a query, or when no framework like Spring Data exists for your language.

Pros:
- No new framework/technology is introduced.
- Full control over the DAOs and DBOs.
- No "magic" is introduced.

Cons:
- Risk of breaking working queries.
- Depending on the number of DAOs, the effort to create them is high.

4. Replace DBOs and DAOs only when and where necessary

This option introduces a migration path that creates only the work that needs to be done, where and when it is needed. The process looks like this:

1. Reverse engineer at least the annotations, fields and methods from the affected classes.
2. Remove the affected entities and, if necessary, the DAO classes.
3. Recreate the affected entities and DAO classes in the specific subpackages.
4. Add the required fields, methods and annotations.
5. Test.

Pros:
- In my particular case, the lowest risk.
- Effort is only spent when really necessary.
- Depending on your language/framework, the effort is very low.

Cons:
- The library is not replaced.
- In case of replacing a DAO, the same cons as for 3.1 and 3.2 apply.
- The effort needs to be spent multiple times.
- Depending on the language, the library needs to be touched every time an entity needs to be migrated.

Conclusion

We saw several strategies for coping with an old library that provides data access but was generated by a generator that isn't working anymore. The pros and cons of each approach reveal that there is no perfect solution, and we must decide per case which strategy matches best. In my particular case we decided on option number 4, because we had already planned to replace the application that uses the library.

Reference: Strategies to migrate from a DAO library from our JCG partner Peter Daum at the Coders Kitchen blog.

But I’m negotiable

I review many emailed job applications each week that include a salary expectation, usually in the form of "seeking $X,000 per year". Some continue with a phrase that has become trite, not to mention quite costly to job seekers everywhere:

"but I'm negotiable"

What these candidates are telling us is "I have a target number, but I want you to know in advance that I'm willing to accept less."

This phrase is also a common response during live conversation with candidates, whether in speaking to me or in interviews with my clients.

INTERVIEWER: What are your salary expectations?
CANDIDATE: I'm seeking 80K, but I'm negotiable.

But usually it goes more like this:

INTERVIEWER: What are your salary expectations?
CANDIDATE: I'm seeking 80K…
INTERVIEWER: [Silently takes a note for five seconds]
CANDIDATE: …but I'm negotiable.

Don't do that. The mistake here is that the candidate willingly dropped their request before hearing any objection to the number provided. In the first instance, they altered their negotiating position before even giving the interviewer so much as an opportunity to say no.

The Fix

IN APPLICATIONS – When providing a salary requirement in writing, there is the option of using a single number or a range. Supplying a range can be useful, as a range may account for variation between what companies offer in time off, benefits, bonus, or perks. When providing a range, expect employers to start negotiations at the bottom. Providing some brief context along with the number ("assuming competitive benefits and working conditions") will provide an opening to negotiate above the provided number/range when necessary. Usually there will be some part of the package that can be cited as below market to justify raising an offer. If the recipient of the application feels the candidate is qualified and at least in the ballpark for the budget, contact will be made and the flexibility topic may come up early.

IN INTERVIEWS – Prepare a number to ask for, along with any context, before the interview. It's quite a common question, and having an answer available should provide the best results; improvisation on this question is usually where things go wrong. When the question about compensation expectations comes up, reply with the number along with any brief and necessary clarifying context. Then stop talking. Don't say a word until the interviewer responds. Even if the stare-down lasts a minute, say nothing. Interviewers realize you are probably a bit on edge and slightly uncomfortable during an interview. Any silence, even for just a few seconds, is commonly interpreted by candidates as a negative sign ("Uh oh, why did she stop asking questions???"). Some hiring managers or HR professionals actually have a pause built into the script in order to determine possible flexibility without having to even ask.

Conclusion

Never start negotiating downward until some objection is provided, and don't mistake the silence of an interviewer for an objection.

Reference: But I'm negotiable from our JCG partner Dave Fecak at the Job Tips For Geeks blog.

SQL Tip of the Day: Be Wary of SELECT COUNT(*)

Recently, I've encountered this sort of query all over the place at a customer site:

DECLARE
  v_var NUMBER(10);
BEGIN
  SELECT COUNT(*)
  INTO v_var
  FROM table1
  JOIN table2 ON table1.t1_id = table2.t1_id
  JOIN table3 ON table2.t2_id = table3.t2_id
  ...
  WHERE some_predicate;

  IF (v_var = 1) THEN
    do_something;
  ELSE
    do_something_else;
  END IF;
END;

Unfortunately, COUNT(*) is often the first solution that comes to mind when we want to check our relations for some predicate. But COUNT() is expensive, especially if all we're doing is checking our relations for existence. Does that word ring a bell? Yes, we should use the EXISTS predicate, because if we don't care about the exact number of records that match a given predicate, we shouldn't go through the complete data set to actually count them. The above PL/SQL block can be rewritten trivially to this one:

DECLARE
  v_var NUMBER(10);
BEGIN
  SELECT CASE WHEN EXISTS (
    SELECT 1
    FROM table1
    JOIN table2 ON table1.t1_id = table2.t1_id
    JOIN table3 ON table2.t2_id = table3.t2_id
    ...
    WHERE some_predicate
  ) THEN 1 ELSE 0 END
  INTO v_var
  FROM dual;

  IF (v_var = 1) THEN
    do_something;
  ELSE
    do_something_else;
  END IF;
END;

Let's measure! Query 1 yields this execution plan:

-----------------------------------------------
| Id  | Operation           | E-Rows | A-Rows |
-----------------------------------------------
|   0 | SELECT STATEMENT    |        |      1 |
|   1 |  SORT AGGREGATE     |      1 |      1 |
|*  2 |   HASH JOIN         |      4 |      4 |
|*  3 |    TABLE ACCESS FULL|      2 |      2 |
|*  4 |    TABLE ACCESS FULL|      6 |      6 |
-----------------------------------------------

Query 2 yields this execution plan:

-----------------------------------------------
| Id  | Operation           | E-Rows | A-Rows |
-----------------------------------------------
|   0 | SELECT STATEMENT    |        |      1 |
|   1 |  NESTED LOOPS       |      4 |      1 |
|*  2 |   TABLE ACCESS FULL |      2 |      1 |
|*  3 |   TABLE ACCESS FULL |      2 |      1 |
|   4 |  FAST DUAL          |      1 |      1 |
-----------------------------------------------

You can ignore the TABLE ACCESS FULL operations; the actual query was executed on a trivial database with no indexes. What's essential, however, are the much improved E-Rows values (E = Estimated) and, even more importantly, the optimal A-Rows values (A = Actual). As you can see, the EXISTS predicate could be aborted early, as soon as the first record that matches the predicate is encountered – in this case immediately. See this post for more details on how to collect Oracle execution plans.

Conclusion

Whenever you encounter a COUNT(*) operation, you should ask yourself if it is really needed. Do you really need to know the exact number of records that match a predicate? Or are you already happy knowing that any record matches the predicate? Answer: it's probably the latter.
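As a portability side note (my own addition, assuming PostgreSQL; the tables and predicate are the hypothetical ones from above), many databases let you express the same early-abort check even more directly:

SELECT EXISTS (
  SELECT 1
  FROM table1
  JOIN table2 ON table1.t1_id = table2.t1_id
  WHERE some_predicate
) AS record_found;  -- returns true/false without counting all matches

Like the Oracle CASE WHEN EXISTS version above, this stops scanning at the first matching row instead of visiting the whole data set.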
Reference: SQL Tip of the Day: Be Wary of SELECT COUNT(*) from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

Java’s Volatile Modifier

A while ago I wrote a Java servlet Filter that loads configuration in its init function (based on a parameter from web.xml). The filter's configuration is cached in a private field. I set the volatile modifier on the field. When I later checked the company Sonar to see if it found any warnings or issues in the code, I was a bit surprised to learn that there was a violation on the use of volatile. The explanation read:

"Use of the keyword 'volatile' is generally used to fine tune a Java application, and therefore, requires a good expertise of the Java Memory Model. Moreover, its range of action is somewhat misknown. Therefore, the volatile keyword should not be used for maintenance purpose and portability."

I would agree that volatile is misknown by many Java programmers, for some even unknown. Not only because it's rarely used in the first place, but also because its definition changed as of Java 1.5. Let me get back to this Sonar violation in a bit and first explain what volatile means in Java 1.5 and up (until Java 1.8 at the time of writing).

What is Volatile?

While the volatile modifier itself comes from C, it has a completely different meaning in Java. This may not help in growing an understanding of it: googling for volatile can lead to very different results. Let's take a quick side step and see what volatile means in C first.

In the C language the compiler ordinarily assumes that variables cannot change value by themselves. While this makes sense as default behavior, sometimes a variable may represent a location that can be changed (like a hardware register). Using a volatile variable instructs the compiler not to apply these optimizations.

Back to Java. The meaning of volatile in C would be useless in Java. The JVM uses native libraries to interact with the OS and hardware. Furthermore, it is simply impossible to point Java variables at specific addresses, so variables actually won't change value by themselves. However, the value of variables on the JVM can be changed by different threads. By default the compiler assumes that variables won't change in other threads, so it can apply optimizations such as reordering memory operations and caching the variable in a CPU register. Using a volatile variable instructs the compiler not to apply these optimizations. This guarantees that a reading thread always reads the variable from memory (or from a shared cache), never from a local cache.

Atomicity

Furthermore, on a 32-bit JVM, volatile makes writes to a 64-bit variable atomic (like long or double). To write a variable, the JVM instructs the CPU to write an operand to a position in memory. When using the 32-bit instruction set, what if the size of a variable is 64 bits? Obviously the variable must be written with two instructions, 32 bits at a time. In multi-threaded scenarios another thread may read the variable halfway through the write; at that point only the first half of the variable has been written. This race condition is prevented by volatile, effectively making writes to 64-bit variables atomic on 32-bit architectures.

Note that above I talked about writes, not updates. Using volatile won't make updates atomic. E.g. ++i when i is volatile would read the value of i from the heap or L3 cache into a local register, increment that register, and write the register back into the shared location of i. In between reading and writing i, it might be changed by another thread. Placing a lock around the read and write instructions makes the update atomic. Or better, use non-blocking instructions from the atomic variable classes in the java.util.concurrent.atomic package, as the sketch below illustrates.
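To make the ++i hazard concrete, here is a minimal sketch of my own (the class and counter names are mine, not the article's): two threads increment a volatile counter and an AtomicInteger the same number of times, but only the atomic one is guaranteed to end up exact.

import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {

    // Writes are visible to all threads, but ++ is a read-modify-write:
    // two threads can read the same value and one increment gets lost.
    static volatile int unsafeCounter = 0;

    // incrementAndGet() performs the read-modify-write atomically,
    // using a non-blocking compare-and-set loop under the hood.
    static final AtomicInteger safeCounter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCounter++;               // may lose updates
                safeCounter.incrementAndGet(); // never loses updates
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println("volatile counter: " + unsafeCounter);     // often < 200000
        System.out.println("atomic counter  : " + safeCounter.get()); // always 200000
    }
}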
Side Effect

A volatile variable also has a side effect on memory visibility. Not only are changes to the volatile variable itself visible to other threads, but so are any side effects of the code that led up to the change, when a thread reads the volatile variable. Or, more formally, a volatile variable establishes a happens-before relationship with subsequent reads of that variable. I.e., from the perspective of memory visibility, writing a volatile variable is effectively like exiting a synchronized block, and reading a volatile variable is like entering one.

Choosing Volatile

Back to my use of volatile to initialize a configuration once and cache it in a private field. Up to now I believe the best way to ensure visibility of this field to all threads is to use volatile. I could have used AtomicReference instead, but since the field is only written once (after construction, hence it cannot be final), atomic variables communicate the wrong intent. I don't want to make updates atomic; I want to make the cache visible to all threads. And for what it's worth, the atomic classes use volatile too.

Thoughts on this Sonar Rule

Now that we've seen what volatile means in Java, let's talk a bit more about this Sonar rule. In my opinion this rule is one of the flaws in configurations of tools like Sonar. Using volatile can be a really good thing to do if you need shared (mutable) state across threads; sure enough, you must keep this to a minimum. But the consequence of this rule is that people who don't understand what volatile is follow the recommendation not to use it. If they remove the modifier, they effectively introduce a race condition.

I do think it's a good idea to automatically raise red flags when misknown or dangerous language features are used. But maybe this is only a good idea when there are better alternatives to solve the same line of problems. In this case, volatile has no such alternative.

Note that in no way is this intended as a rant against Sonar. However, I do think that people should select a set of rules that they find important to apply, rather than embracing default configurations. I find the idea of using rules that are enabled by default a bit naive; there's an extremely high probability that your project is not the one the tool maintainers had in mind when picking their standard configuration. Furthermore, I believe that when you encounter a language feature you don't know, you should learn about it. As you learn about it, you can decide whether there are better alternatives.

Java Concurrency in Practice

The de facto standard book about concurrency on the JVM is Java Concurrency in Practice by Brian Goetz. It explains the various aspects of concurrency in several levels of detail. If you use any form of concurrency in Java (or impure Scala), make sure you at least read the first three chapters of this brilliant book to get a decent high-level understanding of the matter.

Reference: Java's Volatile Modifier from our JCG partner Bart Bakker at the Software Craft blog.

Are You Using SQL PIVOT Yet? You Should!

Every once in a while, we run into these rare SQL issues where we'd like to do something that seems out of the ordinary. One of these things is pivoting rows to columns. A recent question on Stack Overflow by Valiante asked for precisely this. Going from this table:

+------+------------+----------------+-------------------+
| dnId | propNameId | propertyName   | propertyValue     |
+------+------------+----------------+-------------------+
|    1 |         10 | objectsid      | S-1-5-32-548      |
|    1 |         19 | _objectclass   | group             |
|    1 |         80 | cn             | Account Operators |
|    1 |         82 | samaccountname | Account Operators |
|    1 |         85 | name           | Account Operators |
|    2 |         10 | objectsid      | S-1-5-32-544      |
|    2 |         19 | _objectclass   | group             |
|    2 |         80 | cn             | Administrators    |
|    2 |         82 | samaccountname | Administrators    |
|    2 |         85 | name           | Administrators    |
|    3 |         10 | objectsid      | S-1-5-32-551      |
|    3 |         19 | _objectclass   | group             |
|    3 |         80 | cn             | Backup Operators  |
|    3 |         82 | samaccountname | Backup Operators  |
|    3 |         85 | name           | Backup Operators  |
+------+------------+----------------+-------------------+

… we'd like to transform rows into columns as such:

+------+--------------+--------------+-------------------+-------------------+-------------------+
| dnId | objectsid    | _objectclass | cn                | samaccountname    | name              |
+------+--------------+--------------+-------------------+-------------------+-------------------+
|    1 | S-1-5-32-548 | group        | Account Operators | Account Operators | Account Operators |
|    2 | S-1-5-32-544 | group        | Administrators    | Administrators    | Administrators    |
|    3 | S-1-5-32-551 | group        | Backup Operators  | Backup Operators  | Backup Operators  |
+------+--------------+--------------+-------------------+-------------------+-------------------+

The idea is that we only want one row per distinct dnId, and then we'd like to transform the property-name-value pairs into columns, one column per property name.

Using Oracle or SQL Server PIVOT

The above transformation is actually quite easy with Oracle and SQL Server, which both support the PIVOT keyword on table expressions. Here is how the desired result can be produced with SQL Server:

SELECT p.*
FROM (
  SELECT dnId, propertyName, propertyValue
  FROM myTable
) AS t
PIVOT(
  MAX(propertyValue)
  FOR propertyName IN (
    objectsid,
    _objectclass,
    cn,
    samaccountname,
    name
  )
) AS p;

(SQLFiddle here)

And the same query with a slightly different syntax in Oracle:

SELECT p.*
FROM (
  SELECT dnId, propertyName, propertyValue
  FROM myTable
) t
PIVOT(
  MAX(propertyValue)
  FOR propertyName IN (
    'objectsid'      as "objectsid",
    '_objectclass'   as "_objectclass",
    'cn'             as "cn",
    'samaccountname' as "samaccountname",
    'name'           as "name"
  )
) p;

(SQLFiddle here)

How does it work?

It is important to understand that PIVOT (much like JOIN) is a keyword that is applied to a table reference in order to transform it. In the above example, we're essentially transforming the derived table t to form the pivot table p. We could take this further and join p to another derived table as follows:

SELECT *
FROM (
  SELECT dnId, propertyName, propertyValue
  FROM myTable
) t
PIVOT(
  MAX(propertyValue)
  FOR propertyName IN (
    'objectsid'      as "objectsid",
    '_objectclass'   as "_objectclass",
    'cn'             as "cn",
    'samaccountname' as "samaccountname",
    'name'           as "name"
  )
) p
JOIN (
  SELECT dnId, COUNT(*) availableAttributes
  FROM myTable
  GROUP BY dnId
) q USING (dnId);

The above query will now allow for finding those rows for which there isn't a name/value pair in every column.
Let's assume we remove one of the entries from the original table; the above query might now return:

| DNID | OBJECTSID    | _OBJECTCLASS | CN                | SAMACCOUNTNAME    | NAME              | AVAILABLEATTRIBUTES |
|------|--------------|--------------|-------------------|-------------------|-------------------|---------------------|
|    1 | S-1-5-32-548 | group        | Account Operators | Account Operators | Account Operators |                   5 |
|    2 | S-1-5-32-544 | group        | Administrators    | (null)            | Administrators    |                   4 |
|    3 | S-1-5-32-551 | group        | Backup Operators  | Backup Operators  | Backup Operators  |                   5 |

jOOQ also supports the SQL PIVOT clause through its API.

What if I don't have PIVOT?

In simple PIVOT scenarios, users of databases other than Oracle or SQL Server can write an equivalent query that uses GROUP BY and MAX(CASE ...) expressions, as documented in this answer here. A sketch of that fallback follows.
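For illustration, here is what that manual pivot might look like for the example table above; this is my own rendering of the general GROUP BY / MAX(CASE ...) technique in standard SQL, not a quote from the linked answer:

-- One row per dnId; each MAX(CASE ...) picks out one property's value
SELECT
  dnId,
  MAX(CASE WHEN propertyName = 'objectsid'      THEN propertyValue END) AS objectsid,
  MAX(CASE WHEN propertyName = '_objectclass'   THEN propertyValue END) AS _objectclass,
  MAX(CASE WHEN propertyName = 'cn'             THEN propertyValue END) AS cn,
  MAX(CASE WHEN propertyName = 'samaccountname' THEN propertyValue END) AS samaccountname,
  MAX(CASE WHEN propertyName = 'name'           THEN propertyValue END) AS name
FROM myTable
GROUP BY dnId;

The CASE expression yields NULL for non-matching rows, and MAX() simply discards those NULLs, leaving the single matching value per group.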
Reference: Are You Using SQL PIVOT Yet? You Should! from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

Default Methods: Java 8's Unsung Heros

A few weeks ago I wrote a blog saying that developers learn new languages because they're cool. I still stand by this assertion, because the thing about Java 8 is that it's really cool. Whilst the undoubted star of the show is the addition of lambdas and the promotion of functions to first-class variables, my current favourite is default methods. This is because they're such a neat way of adding new functionality to existing interfaces without breaking old code.

The implementation is simple: take an interface, add a concrete method and attach the keyword default as a modifier. The result is that suddenly all existing implementations of your interface can use this code. In this first, simple example, I've added a default method that returns the version number of an interface1.

public interface Version {

  /**
   * Normal method - any old interface method:
   *
   * @return Return the implementing class's version
   */
  public String version();

  /**
   * Default method example.
   *
   * @return Return the version of this interface
   */
  default String interfaceVersion() {
    return "1.0";
  }
}

You can then call this method on any implementing class.

public class VersionImpl implements Version {

  @Override
  public String version() {
    return "My Version Impl";
  }
}

You may ask: why is this cool? If you take the java.lang.Iterable interface and add the following default method, you get the death of the for loop.

  default void forEach(Consumer<? super T> action) {
    Objects.requireNonNull(action);
    for (T t : this) {
      action.accept(t);
    }
  }

The forEach method takes an instance of a class that implements the Consumer<T> interface as an argument. Consumer<T> can be found in the new java.util.function package and is what Java 8 calls a functional interface, which is an interface containing only one abstract method. In this case it's the method accept(T t), which takes one argument and has a void return.

The java.util.function package is probably one of the most important packages in Java 8. It contains a whole bunch of single-method, or functional, interfaces that describe common function types. For example, Consumer<T> contains a function that takes one argument and has a void return, whilst Predicate<T> is an interface with a function that takes one argument and returns a boolean, which is generally used to write filtering lambdas. The implementation of this interface should contain whatever it is that you previously wrote between your for loop's brackets.

So what, you may think, what does that give me? If this wasn't Java 8 then the answer is "not much". To use the forEach(…) method pre Java 8, you'd need to write something like this:

    List<String> list = Arrays.asList(new String[] { "A", "FirsT", "DefaulT", "LisT" });

    System.out.println("Java 6 version - anonymous class");
    Consumer<String> consumer = new Consumer<String>() {

      @Override
      public void accept(String t) {
        System.out.println(t);
      }
    };
    list.forEach(consumer);

But, if you combine this with lambda expressions or method references, you get the ability to write some really cool-looking code. Using a method reference, the previous example becomes:

    list.forEach(System.out::println);

You can do the same thing with a lambda expression:

    list.forEach((t) -> System.out.println(t));

All this seems to be in keeping with one of the big ideas behind Java 8: let the JDK do the work for you. To paraphrase statesman and serial philanderer John F Kennedy, "ask not what you can do with your JDK, ask what your JDK can do for you"2. A second small example of the JDK doing the work appears below.
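To round off the Predicate<T> mention above, here is a minimal sketch of my own (not from the original post): Collection's removeIf, another Java 8 default method, takes a filtering lambda and removes every matching element, again with no hand-written loop in sight.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RemoveIfDemo {

  public static void main(String[] args) {
    // A mutable list is required: removeIf modifies the collection in place
    List<String> list = new ArrayList<>(Arrays.asList("A", "FirsT", "DefaulT", "LisT"));

    // removeIf is a default method on java.util.Collection;
    // the lambda is the Predicate<String> doing the filtering
    list.removeIf(s -> s.length() < 2);

    list.forEach(System.out::println); // prints FirsT, DefaulT, LisT
  }
}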
Design Problems of Default Methods

That's the new, cool way of writing the ubiquitous for loop, but are there problems with adding default methods to interfaces, and if so, what are they and how did the guys on the Java 8 project fix them?

The first one to consider is inheritance. What happens when you have an interface which extends another interface and both have a default method with the same signature? For example, what happens if you have SuperInterface extended by MiddleInterface, and MiddleInterface extended by SubInterface?

public interface SuperInterface {

  default void printName() {
    System.out.println("SUPERINTERFACE");
  }
}

public interface MiddleInterface extends SuperInterface {

  @Override
  default void printName() {
    System.out.println("MIDDLEINTERFACE");
  }
}

public interface SubInterface extends MiddleInterface {

  @Override
  default void printName() {
    System.out.println("SUBINTERFACE");
  }
}

public class Implementation implements SubInterface {

  public void anyOldMethod() {
    // Do something here
  }

  public static void main(String[] args) {
    SubInterface sub = new Implementation();
    sub.printName();

    MiddleInterface middle = new Implementation();
    middle.printName();

    SuperInterface sup = new Implementation();
    sup.printName();
  }
}

No matter which way you cut it, printName() will always print "SUBINTERFACE".

The same question arises when you have a class and an interface containing the same method signature: which method is run? The answer is the 'class wins' rule. Interface default methods will always be ignored in favour of class methods.

public interface AnyInterface {

  default String someMethod() {
    return "This is the interface";
  }
}

public class AnyClass implements AnyInterface {

  @Override
  public String someMethod() {
    return "This is the class - WINNING";
  }
}

Running the code above will always print out: "This is the class - WINNING".

Finally, what happens if a class implements two interfaces and both contain methods with the same signature? This is the age-old C++ diamond problem: how do you solve the ambiguity? Which method is run?

public interface SuperInterface {

  default void printName() {
    System.out.println("SUPERINTERFACE");
  }
}

public interface AnotherSuperInterface {

  default void printName() {
    System.out.println("ANOTHERSUPERINTERFACE");
  }
}

In Java 8's case the answer is neither. If you try to implement both interfaces, you'll get the following error:

Duplicate default methods named printName with the parameters () and () are inherited from the types AnotherSuperInterface and SuperInterface.

In the case where you absolutely MUST implement both interfaces, the solution is to invoke the 'class wins' rule and override the ambiguous method in your implementation.

public class Diamond implements SuperInterface, AnotherSuperInterface {

  /** Added to resolve ambiguity */
  @Override
  public void printName() {
    System.out.println("CLASS WINS");
  }

  public static void main(String[] args) {
    Diamond instance = new Diamond();
    instance.printName();
  }
}

When to Use Default Methods

From a purist point of view, the addition of default methods means that Java interfaces are no longer interfaces. Interfaces were designed as a specification or contract for proposed/intended behaviour: a contract that the implementing class MUST fulfil.
Adding default methods means that there is virtually no difference between interfaces and abstract base classes3. This means that they're open to abuse, as some inexperienced developers may think it cool to rip out base classes from their codebase and replace them with default-method-based interfaces – just because they can – whilst others may simply confuse abstract classes with interfaces implementing default methods. I'd currently suggest using default methods solely for their intended use case: evolving legacy interfaces without breaking existing code. Though I may change my mind.

1 It's not very useful, but it demonstrates a point…
2 John F Kennedy, inauguration speech, January 20th 1961.
3 Abstract base classes can have a constructor whilst interfaces can't. Classes can have private instance variables (i.e. state); interfaces can't.

Reference: Default Methods: Java 8's Unsung Heros from our JCG partner Roger Hughes at the Captain Debug's Blog blog.

Validation in Java (JavaFX)

Validation is one thing that's missing from the core JavaFX framework. To fill in this gap there is already a third-party validation library in ControlsFX. However, there's one issue I have with it: it wasn't created with FXML in mind. That's not to say it isn't a good library; it just misses this detail, and for me this is a no-go. Because of that I decided to create my own validation framework: FXValidation.

How it works

To show you how FXValidation works, let's start from the bottom up, by showing you an example of what an FXML file might look like when using this library. This is a simple example of a login screen where the user needs to enter both a user name and a password:

<Label>
  <text>User Name:</text>
</Label>
<TextField fx:id="userName" id="userName"></TextField>
<Label>
  <text>Password:</text>
</Label>
<PasswordField fx:id="password" id="password"></PasswordField>

<Button text="Submit" onAction="#submitPressed"></Button>

<fx:define>
  <RequiredField fx:id="requiredField1">
    <srcControl>
      <fx:reference source="userName"></fx:reference>
    </srcControl>
  </RequiredField>
  <RequiredField fx:id="requiredField2">
    <srcControl>
      <fx:reference source="password"></fx:reference>
    </srcControl>
  </RequiredField>
</fx:define>

<ErrorLabel message="Please enter your username">
  <validator>
    <fx:reference source="requiredField1"></fx:reference>
  </validator>
</ErrorLabel>
<ErrorLabel message="Please enter your password">
  <validator>
    <fx:reference source="requiredField2"></fx:reference>
  </validator>
</ErrorLabel>

At the beginning of the FXML snippet I define a text field and a password field for entering the login details. There's also a submit button so the user can send the login information to the system. After that comes the interesting part. First we define a couple of validators of type RequiredField. These validators check whether the input in question is empty, and if so they record in a flag that the validation has errors. There are also other types of validators built into the FXValidation framework, but we'll get to that in a bit.

Finally, we define a couple of ErrorLabels. These are nodes that implement IValidationDisplay; any class that implements this interface is a class whose purpose is to display information to the user whenever there is an error in the validation process. Currently there is only one such class in the framework: ErrorLabel.

The last step is to trigger validation when the user clicks the submit button. This is done in the submit method of the controller:

public void submitPressed(ActionEvent actionEvent) {
  requiredField1.eval();
  requiredField2.eval();
}

This triggers validation for the validators we have defined. If there are errors, the ErrorLabels will display the error message that was defined on them. The validators also do one extra thing: they add the CSS style class "error" to every control that is in error after the validation process has run. This allows the programmer to style the controls differently using CSS whenever those controls have the error class appended to them. The programmer can check for errors in the validation process by checking the hasErrors property of the validators. And here's our example in action:

The details

From what I've shown you above, we can see that there are basically two types of classes involved:
- The validator: takes care of checking whether the target control (srcControl) conforms to the validation rule. If it does not, it appends the "error" style class to the target control and sets its hasErrors property to true. All validators extend from ValidatorBase.
- The error display information: takes care of informing the user what went wrong with the validation. It might be that the field is required, that the field's content doesn't have the necessary number of characters, etc. All these classes implement IValidationDisplay.

The library currently provides three validators and only one error "displayer", which is ErrorLabel. The validators are the following:

- RequiredField: checks whether the target control (srcControl) has content; if it doesn't, it flags an error.
- CardinalityValidator: checks whether the target control (srcControl) has at least a min number of characters and at most a max number of characters.
- RegexValidator: checks the content of the target control (srcControl) against a given regular expression.

And that's it.

Reference: Validation in Java (JavaFX) from our JCG partner Pedro Duque Vieira at the Pixel Duke blog.

INTEL Perceptual Computing – RealSense Challenge 2014

Perceptual computing technology is redefining the boundaries between human and computer interaction. Intel invites you to claim your share of history by designing new, leading-edge perceptual computing apps. RealSense Challenge 2014 is a new contest in which developers are challenged to design perceptual computing apps. At the heart of this competition, the new Intel RealSense 3D camera and SDK allow you to interact with the computer through hand/finger tracking, facial analysis, speech recognition, background subtraction and augmented reality.

RealSense Challenge 2014

The competition has two phases: Ideation and Development. The Ideation phase is open until the end of September; all you are asked to do is submit your ideas (as an individual or as a team) and try to be among the 1300 participants who will be invited to turn their ideas into working demos. Everyone participating in the Development phase will be loaned the Intel 3D camera and the RealSense SDK for C/C++ development.

There are also two tracks for this challenge. The Pioneer track is open to all developers from around the world, whereas the Ambassador track is only open to developers who submitted a demo to the Intel Perceptual Computing Challenge 2013 or to its Ultimate Coder Challenge. Up to 1000 Pioneers and 300 Ambassadors will be chosen to move forward to the Development phase. Both contest tracks will accept entries from participants in the following innovation categories:

- Gaming + Play
- Learning
- Entertainment
- Interact Naturally
- Collaboration/Creation
- Open Innovation

There are $1 million in cash prizes to be shared by the Pioneer and Ambassador groups; each track competes independently.

Pioneer track:
- GRAND PRIZE (1), $25,000: one overall winner, chosen from the first place winners of each category, will win an additional $25,000 cash prize.
- FIRST PLACE (5), $25,000: the top scoring demo in each category will win a $25,000 cash prize.
- SECOND PLACE (10), $10,000: two demos from each of the 5 categories will receive a $10,000 cash prize.
- EARLY SUBMISSION (50), $1,000: the top scoring demos submitted prior to the early submission deadline, across all 5 categories, will each receive a $1,000 cash prize.
- HASWELL NUC (250): the top 250 scoring demos from Phase 1, across all 5 categories, will receive a Haswell NUC device valued at nearly $600.

Ambassador track:
- GRAND PRIZE (1), $50,000: one overall winner, chosen from the first place winners of each category, will win an additional $50,000 cash prize.
- FIRST PLACE (5), $50,000: the top scoring demo in each category will win a $50,000 cash prize.
- SECOND PLACE (10), $20,000: two demos from each of the 5 categories will receive a $20,000 cash prize.
- EARLY SUBMISSION (30), $1,000: the top scoring demos submitted prior to the early submission deadline, across all 5 categories, will each receive a $1,000 cash prize.
- HASWELL NUC (50): the top 50 scoring demos from Phase 1, across all 5 categories, will receive a Haswell NUC device valued at nearly $600.

If you are a Pioneer, you can sign up today and have until October 1, 2014 to submit your idea. If you are an Ambassador, simply go to the challenge page and sign in with the email address used for the 2013 competition.
More info and subscription on the RealSense Challenge 2014 page.

On the side, Intel is organizing two webinars for developers to get inspired and to learn more about natural user interfaces and RealSense technology:

- Webinar 1 (August 13th 2014, 1pm Eastern): learn about gesture recognition technology, on the technical side.
- Webinar 2 (August 20th 2014, 1pm Eastern): the wide variety of usages for natural user interfaces.