JavaFX Tip 12: Define Icons in CSS

If you are a UI developer coming from Swing, like me, then there is a good chance that you are still setting images / icons directly in your code. Most likely something like this:

```java
import javafx.scene.control.Label;
import javafx.scene.image.ImageView;

public class MyLabel extends Label {

    public MyLabel() {
        setGraphic(new ImageView(MyLabel.class.getResource("image.gif").toExternalForm()));
    }
}
```

In this example the image file is looked up via Class.getResource(), the URL is passed to the constructor of the ImageView node, and this node is set as the "graphic" property on the label. This approach works perfectly well, but with JavaFX there is a more elegant way: you can put the image definition into a CSS file, making it easy for you and / or others to replace it (say, the marketing department has decided to change the corporate identity once again). The same result as above can be achieved this way:

```java
import javafx.scene.control.Label;

public class CSSLabel extends Label {

    public CSSLabel() {
        getStyleClass().add("folder-icon");
    }
}
```

Now you obviously need a CSS file as well:

```css
.folder-icon {
    -fx-graphic: url("image.gif");
}
```

And in your application you need to add the stylesheet to your scene graph. Here we are adding it to the scene:

```java
import javafx.application.Application;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.stage.Stage;

public class MyApplication extends Application {

    @Override
    public void start(Stage primaryStage) throws Exception {
        CSSLabel label = new CSSLabel();
        label.setText("Folder");
        label.setAlignment(Pos.CENTER);

        Scene scene = new Scene(label);
        scene.getStylesheets().add(MyApplication.class.getResource("test.css").toExternalForm());

        primaryStage.setScene(scene);
        primaryStage.setTitle("Image Example");
        primaryStage.setWidth(250);
        primaryStage.setHeight(100);
        primaryStage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```

With this approach you have a clean separation between your controls and their appearance, and you allow for easy customization as well.

Reference: JavaFX Tip 12: Define Icons in CSS from our JCG partner Dirk Lemmermann at the Pixel Perfect blog.

Integrate apps with Neo4j using Zapier

Recently, I was directed to Zapier to get some lightweight integration done between systems for a quick proof of concept. Initially skeptical, I found that it really can save time and tie together all those pieces of your system you never got around to integrating. Moreover, it is a way for people to integrate the applications they use without having to code, or to pay a developer to do it for them. Going through the Zapbook, I found MongoDB, MySQL, PostgreSQL, SQL Server and (gasp!) no Neo4j. Sad.

I already had a potential use case, which was to collect data via a form and get it into Neo4j ASAP, i.e. with no coding. Google Forms is available on Zapier, so I went about making Neo4j available as well. I've now got a first version of the zap ready for Neo4j, which allows one to collect data triggered by another zap and save it to Neo4j via a Cypher statement. Here's what it looks like. Using the Google Forms example, I've set up a form to capture feedback about a product, and I want to push this data into Neo4j every time the form is submitted.

Step 1: Log into Zapier and click on Make a Zap!

Step 2: The triggering app is Google Docs, where we want to save data to Neo4j every time a form is filled in, i.e. the spreadsheet backing the form has a new row inserted. The Neo4j zap currently supports only one action: Update the graph.

Step 3: Follow the instructions to make sure Zapier can access your Google Docs account.

Step 4: Set up a Neo4j account. Call it whatever you like, and supply the username, password and URL. Note that in this version, the assumption is that your Neo4j database is not left open to the world. I used the Authentication extension to set mine up. Click on Continue and make sure Zapier confirms that it can indeed access your Neo4j database.

Step 5: Select your spreadsheet and the worksheet that contains the data. Here's what my spreadsheet looks like.

Step 6: Write a Cypher query to convert that row into nodes and relationships.
You must write a parameterized Cypher query in the Cypher Query field. The Cypher Parameters field must contain a comma-separated list of the parameter names used in the query and the fields selected from the triggering app (use the Insert Fields button).

Step 7: See what the trigger and action samples look like, then test it out and celebrate when it says Success!

I checked what my database looked like at this point and, sure enough, the data was there. That's all there is to it. Zapier will poll the triggering app every 15 minutes, so by the time all your forms are filled in, you have a Neo4j database filled with data! I tried out the MongoDB->Neo4j and Trello->Neo4j integrations and they worked well. Whether you need a quick and dirty integration with Neo4j, or you want to collect data from other applications into Neo4j for later analysis, or you're building a serious application, Zapier could be of use. If you'd like to try it out, send @luannem a message and I'll send you a beta invite. And if you think this is useful, I'd be happy to hear about it and add more features to the Neo4j zap!

Reference: Integrate apps with Neo4j using Zapier from our JCG partner Luanne Misquitta at the Thought Bytes blog.
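To make Step 6 above concrete, here is the kind of parameterized statement the Cypher Query field might contain for a product feedback form. This is an illustrative sketch only: the labels, property names and parameter names (productName, comment, rating) are my assumptions, not taken from the actual form.

```cypher
// Hypothetical mapping of one spreadsheet row to the graph:
// merge the product node, then attach a new feedback node to it.
MERGE (p:Product {name: {productName}})
CREATE (f:Feedback {comment: {comment}, rating: {rating}})
CREATE (f)-[:ABOUT]->(p)
```

The Cypher Parameters field would then list productName, comment and rating, each paired with the corresponding form field inserted via the Insert Fields button.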

9 Differences between TCP and UDP Protocol – Java Network Interview Question

TCP and UDP are two transport layer protocols which are used extensively on the internet to transmit data from one host to another. A good knowledge of how TCP and UDP work is essential for any programmer, which is why the difference between TCP and UDP is a popular Java programming interview question. I have seen this question many times in various Java interviews, especially for server-side Java developer positions. Since FIX (Financial Information Exchange) is also a TCP-based protocol, several investment banks, hedge funds, and exchange solution providers look for Java developers with a good knowledge of TCP and UDP. Writing FIX engines and server-side components for high-speed electronic trading platforms requires capable developers with a solid understanding of the fundamentals, including data structures, algorithms and networking. By the way, the use of TCP and UDP is not limited to one area; it's at the heart of the internet: HTTP, the protocol at the core of the web, is based on TCP. One more reason why a Java developer should understand these two protocols in detail is that Java is extensively used to write multi-threaded, concurrent and scalable servers, and it provides a rich socket programming API for both TCP- and UDP-based communication. In this article, we will learn the key differences between the TCP and UDP protocols. To start with, TCP stands for Transmission Control Protocol and UDP stands for User Datagram Protocol; both are used extensively to build Internet applications.

Differences between TCP vs UDP Protocol

I love to compare two things on different points; this not only makes them easy to compare but also makes it easy to remember the differences. When we compare TCP to UDP, we learn how both protocols work, which one provides reliable and guaranteed delivery and which doesn't, which protocol is faster and why, and, most importantly, when to choose TCP over UDP while building your own distributed application.
In this article we will see the differences between UDP and TCP in 9 points: connection set-up, reliability, ordering, speed, overhead, header size, congestion control, applications, different protocols based upon TCP and UDP, and how they transfer data.

Connection oriented vs Connection less

The first and foremost difference between them is that TCP is a connection-oriented protocol while UDP is a connection-less protocol. This means that a connection is established between client and server before they can send data over TCP. The connection establishment process is also known as the TCP handshake, where control messages are interchanged between client and server. The image here describes the process: the client, which is the initiator of the TCP connection, sends a SYN message to the server, which is listening on a TCP port. The server receives it and sends a SYN-ACK message, which is received by the client and responded to with an ACK. Once the server receives this ACK message, the TCP connection is established and ready for data transmission. On the other hand, UDP is a connection-less protocol, and a point-to-point connection is not established before sending messages. That's the reason why UDP is more suitable for multicast distribution of messages: one-to-many distribution of data in a single transmission.

Reliability

TCP provides a delivery guarantee, which means a message sent using the TCP protocol is guaranteed to be delivered to the client. If a message is lost in transit, it is recovered by resending, which is handled by the TCP protocol itself. On the other hand, UDP is unreliable; it doesn't provide any delivery guarantee, and a datagram package may be lost in transit. That's why UDP is not suitable for programs which require guaranteed delivery.

Ordering

Apart from the delivery guarantee, TCP also guarantees the order of messages.
The messages will be delivered to the client in the same order that the server sent them, even though they may arrive out of order at the other end of the network; the TCP protocol will do all the sequencing and ordering for you. UDP doesn't provide any ordering or sequencing guarantee: datagram packets may arrive in any order. That's why TCP is suitable for applications which need delivery in a sequenced manner, though there are UDP-based protocols as well which provide ordering and reliability by using sequence numbers and redelivery, e.g. TIBCO Rendezvous, which is actually a UDP-based application.

Data Boundary

TCP does not preserve data boundaries; UDP does. In the Transmission Control Protocol, data is sent as a byte stream, and no distinguishing indications are transmitted to signal message (segment) boundaries. In UDP, packets are sent individually and are checked for integrity only if they arrive. Packets have definite boundaries which are honoured upon receipt, meaning a read operation at the receiver socket will yield an entire message as it was originally sent. TCP will also deliver a complete message after assembling all its bytes, though: messages are stored in TCP buffers before sending to make optimum use of network bandwidth.

Speed

In one word, TCP is slow and UDP is fast. Since TCP has to create a connection and ensure guaranteed, ordered delivery, it does a lot more than UDP. This costs TCP in terms of speed, which is why UDP is more suitable where speed is a concern, for example online video streaming, telecasts or online multiplayer games.

Heavy weight vs Light weight

Because of the overhead mentioned above, the Transmission Control Protocol is considered heavyweight compared to the lightweight UDP protocol. The simple mantra of UDP is to deliver messages without bearing any overhead of creating a connection or guaranteeing delivery and ordering.
This is also reflected in their header sizes, since the header is used to carry metadata.

Header size

TCP has a bigger header than UDP. The usual header size of a TCP packet is 20 bytes, which is more than double the 8-byte header of a UDP datagram. The TCP header contains Sequence Number, Ack Number, Data Offset, Reserved, Control Bits, Window, Urgent Pointer, Options, Padding, Checksum, Source Port, and Destination Port, while the UDP header only contains Length, Source Port, Destination Port, and Checksum.

Congestion or Flow control

TCP does flow control. TCP requires three packets to set up a socket connection before any user data can be sent, and it handles reliability and congestion control. On the other hand, UDP does not have an option for flow control.

Usage and application

Where are TCP and UDP used on the internet? After learning the key differences between TCP and UDP, we can easily conclude which situations suit them. Since TCP provides delivery and sequencing guarantees, it is best suited for applications that require high reliability, where transmission time is relatively less critical. UDP is more suitable for applications that need fast, efficient transmission, such as games. UDP's stateless nature is also useful for servers that answer small queries from huge numbers of clients. In practice, TCP is used in the finance domain, e.g. the FIX protocol is a TCP-based protocol, while UDP is used heavily in gaming and entertainment sites.

TCP and UDP based Protocols

The best examples of TCP-based higher-level protocols are HTTP and HTTPS, which are everywhere on the internet. In fact, most of the common protocols you are familiar with, e.g. Telnet, FTP and SMTP, are all based on the Transmission Control Protocol. UDP doesn't have anything as popular as HTTP, but it is also extensively used in protocols like DHCP and DNS.
Some of the other protocols which are based on the User Datagram Protocol are the Simple Network Management Protocol (SNMP), TFTP, BOOTP and NFS (early versions). Always remember to mention that TCP is connection-oriented, reliable and slow, provides guaranteed delivery and preserves the order of messages, while UDP is connection-less, unreliable and gives no ordering guarantee, but is a fast protocol. TCP's overhead is also much higher than UDP's, as it transmits more metadata per packet: the header size of the Transmission Control Protocol is 20 bytes, compared to the 8-byte header of the User Datagram Protocol. Use TCP if you can't afford to lose any message, while UDP is better for high-speed data transmission where the loss of a single packet is acceptable, e.g. video streaming or online multiplayer games. While working on a TCP/UDP-based application on Linux, it's also good to remember basic networking commands, e.g. telnet and netstat; they help tremendously in debugging or troubleshooting any connection issue.

Reference: 9 Differences between TCP and UDP Protocol – Java Network Interview Question from our JCG partner Javin Paul at the Javarevisited blog.
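The connection-oriented vs connection-less difference is easy to see in Java's socket API. Below is an illustrative sketch (the class and method names are mine): the TCP half completes a handshake over loopback before any bytes flow and then reads the reply as a byte stream, while the UDP half simply sends a single datagram to itself with no connection at all.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpVsUdp {

    // TCP: the kernel completes a three-way handshake on connect(),
    // before a single byte of user data is exchanged.
    static String tcpEcho(String msg) throws IOException {
        try (ServerSocket server = new ServerSocket(0, 1, InetAddress.getLoopbackAddress())) {
            try (Socket client = new Socket(server.getInetAddress(), server.getLocalPort());
                 Socket accepted = server.accept()) {
                client.getOutputStream().write(msg.getBytes("UTF-8"));
                client.shutdownOutput(); // signal end of the byte stream
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] chunk = new byte[512];
                InputStream in = accepted.getInputStream();
                int n;
                while ((n = in.read(chunk)) != -1) {
                    buf.write(chunk, 0, n); // TCP is a byte stream: read until EOF
                }
                return buf.toString("UTF-8");
            }
        }
    }

    // UDP: no handshake; one send() produces exactly one datagram.
    static String udpEcho(String msg) throws IOException {
        try (DatagramSocket socket = new DatagramSocket(0, InetAddress.getLoopbackAddress())) {
            byte[] payload = msg.getBytes("UTF-8");
            socket.send(new DatagramPacket(payload, payload.length,
                    socket.getLocalAddress(), socket.getLocalPort()));
            DatagramPacket packet = new DatagramPacket(new byte[512], 512);
            socket.receive(packet); // one receive yields exactly one datagram
            return new String(packet.getData(), 0, packet.getLength(), "UTF-8");
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(tcpEcho("over tcp"));
        System.out.println(udpEcho("over udp"));
    }
}
```

Note how the TCP side has to read in a loop until end-of-stream, because TCP does not preserve message boundaries, whereas a single UDP receive yields exactly one datagram.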

Java Keystore Tutorial

Table Of Contents

1. Introduction
2. SSL and how it works
3. Private Keys
4. Public Certificates
5. Root Certificates
6. Certificate Authorities
7. Certificate Chain
8. Keystore using Java keytool
9. Keystore Commands
10. Configure SSL using Keystores and Self Signed Certificates on Apache Tomcat

1. Introduction

Who among us hasn't visited eBay or Amazon to buy something, or logged into a personal bank account to check it? Do you think that those sites are secure enough to trust with your personal data, like a credit card number or bank account number? Most of those sites use the Secure Sockets Layer (SSL) protocol to secure their Internet applications. SSL allows the data from a client, such as a Web browser, to be encrypted prior to transmission so that someone trying to sniff the data is unable to decipher it. Many Java application servers and Web servers support the use of keystores for SSL configuration. If you're building secure Java programs, learning to build a keystore is the first step.

2. SSL and how it works

An HTTP-based SSL connection is always initiated by the client using a URL starting with https:// instead of http://. At the beginning of an SSL session, an SSL handshake is performed. This handshake produces the cryptographic parameters of the session. A simplified overview of how the SSL handshake is processed is shown in the diagram below. In short, this is how it works:

- A browser requests a secure page (usually https://).
- The web server sends its public key with its certificate.
- The browser checks that the certificate was issued by a trusted party (usually a trusted root CA), that the certificate is still valid, and that the certificate is related to the site contacted.
- The browser then uses the public key to encrypt a random symmetric encryption key and sends it to the server, along with the encrypted URL required and other encrypted HTTP data.
- The web server decrypts the symmetric encryption key using its private key, and uses the symmetric key to decrypt the URL and HTTP data.
- The web server sends back the requested HTML document and HTTP data, encrypted with the symmetric key.
- The browser decrypts the HTTP data and HTML document using the symmetric key and displays the information.

The world of SSL has, essentially, three types of certificates: private keys, public keys (also called public certificates or site certificates), and root certificates.

3. Private Keys

The private key contains the identity information of the server, along with a key value. The server should keep this key safe and protected by a password, because it's used to negotiate the hash during the handshake. If it leaks, it can be used by someone to decrypt the traffic and get at your personal information. It's like leaving your house key in the door lock.

4. Public Certificates

The public certificate (public key) is the portion that is presented to a client; it's like the passport you show at the airport. The public certificate, tightly associated with the private key, is created from the private key using a Certificate Signing Request (CSR). After you create a private key, you create a CSR, which is sent to your Certificate Authority (CA). The CA returns a signed certificate, which has information about the server identity and about the CA.

5. Root Certificates

A root CA certificate is a CA certificate which is simply a self-signed certificate. This certificate represents an entity which issues certificates, known as a Certificate Authority or CA, such as VeriSign, Thawte, etc.

6. Certificate Authorities

Companies who will sign certificates for you, such as VeriSign, Thawte, Comodo and GeoTrust. Also, many companies and institutions act as their own CA, either by building a complete implementation from scratch or by using an open source option such as OpenSSL.

7. Certificate Chain

When a server and client establish an SSL connection, a certificate is presented to the client; the client should determine whether to trust this certificate by examining the certificate chain. The client examines the issuer of the certificate, searches its list of trusted root certificates, and compares the issuer on the presented certificate to the subjects of the trusted certificates. If a match is found, the connection proceeds. If not, the web browser may pop up a dialog box, warning you that it cannot trust the certificate and offering the option to trust it anyway.

8. Keystore using Java keytool

Java Keytool is a key and certificate management utility. It allows users to manage their own public/private key pairs and certificates. Java Keytool stores the keys and certificates in what is called a keystore, and protects private keys with a password. Each certificate in a Java keystore is associated with a unique alias. When creating a Java keystore, you will first create the .jks file that initially contains only the private key, then generate a CSR. Then you will import the certificate into the keystore, including any root certificates.

9. Keystore Commands

Create Keystore, Keys and Certificate Requests

Generate a Java keystore and key pair:

```
keytool -genkey -alias mydomain -keyalg RSA -keystore keystore.jks -storepass password
```

Generate a certificate signing request (CSR) for an existing Java keystore:

```
keytool -certreq -alias mydomain -keystore keystore.jks -storepass password -file mydomain.csr
```

Generate a keystore and self-signed certificate:

```
keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass password -validity 360
```

Import Certificates

Import a root or intermediate CA certificate to an existing Java keystore:

```
keytool -import -trustcacerts -alias root -file Thawte.crt -keystore keystore.jks -storepass password
```

Import a signed primary certificate to an existing Java keystore:

```
keytool -import -trustcacerts -alias mydomain -file mydomain.crt -keystore keystore.jks -storepass password
```

Export Certificates

Export a certificate from a keystore:

```
keytool -export -alias mydomain -file mydomain.crt -keystore keystore.jks -storepass password
```

Check/List/View Certificates

Check a stand-alone certificate:

```
keytool -printcert -v -file mydomain.crt
```

Check which certificates are in a Java keystore:

```
keytool -list -v -keystore keystore.jks -storepass password
```

Check a particular keystore entry using an alias:

```
keytool -list -v -keystore keystore.jks -storepass password -alias mydomain
```

Delete Certificates

Delete a certificate from a Java keystore:

```
keytool -delete -alias mydomain -keystore keystore.jks -storepass password
```

Change Passwords

Change a Java keystore password:

```
keytool -storepasswd -new new_storepass -keystore keystore.jks -storepass password
```

Change a private key password:

```
keytool -keypasswd -alias client -keypass old_password -new new_password -keystore client.jks -storepass password
```

10. Configure SSL using Keystores and Self Signed Certificates on Apache Tomcat

Generate a new keystore and self-signed certificate. Using this command, you will be prompted to enter specific information such as user name, organizational unit, company and location:

```
keytool -genkey -alias tomcat -keyalg RSA -keystore /home/ashraf/Desktop/JavaCodeGeek/keystore.jks -validity 360
```

You can list the details of the certificate you just created using this command:

```
keytool -list -keystore /home/ashraf/Desktop/JavaCodeGeek/keystore.jks
```

Download Tomcat 7 and configure Tomcat's server to support SSL (https) connections by adding a connector element in Tomcat\conf\server.xml:

```xml
<Connector port="8443" maxThreads="150" scheme="https" secure="true"
           SSLEnabled="true"
           keystoreFile="/home/ashraf/Desktop/JavaCodeGeek/.keystore"
           keystorePass="password" clientAuth="false" keyAlias="tomcat"
           sslProtocol="TLS" />
```

Start Tomcat and go to https://localhost:8443/. You will find a security warning where the browser presents untrusted error messages. In the case of e-commerce, such error messages result in an immediate lack of confidence in the website, and organizations risk losing confidence and business from the majority of consumers. That's normal, as your certificate isn't signed yet by a CA such as Thawte or Verisign, who would verify the identity of the requester and issue a signed certificate. You can click Proceed anyway until you receive your signed certificate.
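The keytool commands above drive keystores from the shell; the same stores can also be created and read programmatically via the java.security.KeyStore API. The following is a minimal sketch (the class and method names are mine) that creates an empty, password-protected JKS keystore, writes it to disk and loads it back:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.KeyStore;

public class KeystoreDemo {

    // Create an empty JKS keystore, save it to disk, and load it back.
    // Returns the number of entries in the reloaded store.
    static int storeAndReload(File file, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, password);              // null stream: initialize an empty keystore
        try (OutputStream out = new FileOutputStream(file)) {
            ks.store(out, password);          // the password protects the store's integrity
        }
        KeyStore reloaded = KeyStore.getInstance("JKS");
        try (InputStream in = new FileInputStream(file)) {
            reloaded.load(in, password);      // verifies the integrity check on load
        }
        return reloaded.size();               // 0 for a freshly created store
    }

    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("demo", ".jks");
        file.deleteOnExit();
        System.out.println(storeAndReload(file, "changeit".toCharArray()));
    }
}
```

Loading with the wrong password typically surfaces as an IOException, which is how the tamper/integrity check described above shows up in code.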

Enterprise Integration Patterns (EIP) Revisited in 2014

Today, I gave a talk about "Enterprise Integration Patterns (EIP) Revisited in 2014" at Java Forum Stuttgart 2014, a great conference for developers and architects with 1600 attendees.

Enterprise Integration Patterns

Data exchange between companies is increasing a lot. Hence, the number of applications which must be integrated increases, too. The emergence of service-oriented architectures and cloud computing boosts this even more. The realization of these integration scenarios is a complex and time-consuming task, because different applications and services do not use the same concepts, interfaces, data formats and technologies. Originated and published over ten years ago by Gregor Hohpe and Bobby Woolf, Enterprise Integration Patterns (EIP) became the worldwide de facto standard for describing integration problems. They offer a standardized way to split huge, complex integration scenarios into smaller recurring problems. These patterns appear in almost every integration project. Most developers have already used some of these patterns, such as the filter, splitter or content-based router, some of them without being aware of using EIPs. Today, EIPs are still used to reduce effort and complexity a lot. This session revisits EIPs and gives an overview of the status quo.

Open Source, Apache Camel, Talend ESB, JBoss, WSO2, TIBCO BusinessWorks, StreamBase, IBM WebSphere, Oracle, ...

Fortunately, EIPs offer more possibilities than just being used for modelling integration problems in a standardized way. Several frameworks and tools already implement these patterns, so the developer does not have to implement EIPs on his own. Therefore, the end of the session shows the different frameworks and tools available which can be used for modelling and implementing complex integration scenarios using EIPs.

Slides

Reference: Enterprise Integration Patterns (EIP) Revisited in 2014 from our JCG partner Kai Waehner at the Blog about Java EE / SOA / Cloud Computing blog.
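To make the pattern idea concrete, here is a plain-Java sketch of the content-based router mentioned above. All names are illustrative, and frameworks such as Apache Camel ship ready-made implementations of this pattern, so you would rarely hand-roll it; the point is only to show the recurring shape the EIP captures.

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class ContentBasedRouter {

    // Ordered list of (condition, destination) pairs, checked first to last.
    private final List<Map.Entry<Predicate<String>, String>> routes = new ArrayList<>();
    private final String defaultDestination;

    ContentBasedRouter(String defaultDestination) {
        this.defaultDestination = defaultDestination;
    }

    ContentBasedRouter when(Predicate<String> condition, String destination) {
        routes.add(new AbstractMap.SimpleEntry<>(condition, destination));
        return this;
    }

    // Route a message to the first matching destination,
    // falling back to the default (the "otherwise" branch).
    String route(String message) {
        for (Map.Entry<Predicate<String>, String> r : routes) {
            if (r.getKey().test(message)) {
                return r.getValue();
            }
        }
        return defaultDestination;
    }

    public static void main(String[] args) {
        ContentBasedRouter router = new ContentBasedRouter("queue:other")
                .when(m -> m.contains("order"), "queue:orders")
                .when(m -> m.contains("invoice"), "queue:invoices");
        System.out.println(router.route("new order #42"));
        System.out.println(router.route("hello"));
    }
}
```

In Camel the same routing is a few lines of the choice()/when()/otherwise() DSL; the value of the EIP vocabulary is that this shape is shared across all such tools.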

Writing Tests for Data Access Code – Don’t Test the Framework

When we write tests for our data access code, should we test every method of its public API? It sounds natural at first. After all, if we don't test everything, how can we know that our code works as expected? That question provides us with an important clue: Our code. We should write tests only for our own code.

What Is Our Own Code?

It is sometimes hard to identify the code which we should test. The reason for this is that our data access code is integrated tightly with the library or framework which we use when we save information to the used data storage or read information from it. For example, if we want to create a Spring Data JPA repository which provides CRUD operations for Todo objects, we should create an interface which extends the CrudRepository interface. The source code of the TodoRepository interface looks as follows:

```java
import org.springframework.data.repository.CrudRepository;

public interface TodoRepository extends CrudRepository<Todo, Long> {
}
```

Even though we haven't added any methods to our repository interface, the CrudRepository interface declares many methods which are available to the classes that use our repository interface. These methods are not our code, because they are implemented and maintained by the Spring Data team. We only use them. On the other hand, if we add a custom query method to our repository, the situation changes. Let's assume that we have to find all todo entries whose title is equal to the given search term. After we have added this query method to our repository interface, its source code looks as follows:

```java
import java.util.List;

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

public interface TodoRepository extends CrudRepository<Todo, Long> {

    @Query("SELECT t FROM Todo t where t.title=:searchTerm")
    public List<Todo> search(@Param("searchTerm") String searchTerm);
}
```

It would be easy to claim that this method is our own code and that is why we should test it. However, the truth is a bit more complex.
Even though the JPQL query was written by us, Spring Data JPA provides the code which passes that query forward to the used JPA provider. And still, I think that this query method is our own code, because the most essential part of it was written by us. If we want to identify our own data access code, we have to locate the essential part of each method. If this part was written by us, we should treat that method as our own code. This is all pretty obvious, and the more interesting question is:

Should We Test It?

Our repository interface provides two kinds of methods to the classes which use it:

- It provides methods that are declared by the CrudRepository interface.
- It provides a query method that was written by us.

Should we write integration tests for the TodoRepository interface and test all of these methods? No. We should not do this because:

- The methods declared by the CrudRepository interface are not our own code. This code is written and maintained by the Spring Data team, and they have ensured that it works. If we don't trust that their code works, we should not use it.
- Our application probably has many repository interfaces which extend the CrudRepository interface. If we decide to write tests for the methods declared by the CrudRepository interface, we have to write these tests for all repositories. If we choose this path, we will spend a lot of time writing tests for someone else's code, and frankly, it is not worth it.
- Our own code might be so simple that writing tests for our repository makes no sense.

In other words, we should concentrate on finding an answer to this question: should we write integration tests for our repository methods (methods which were written by us), or should we just write end-to-end tests? The answer to this question depends on the complexity of our repository method.
I am aware that complexity is a pretty vague word, and that is why we need some kind of guideline to help us find the best way of testing our repository methods. One way to make this decision is to think about the amount of work required to test every possible scenario. This makes sense because:

- It takes less work to write integration tests for a single repository method than to write the same tests for the feature that uses the repository method.
- We have to write end-to-end tests anyway.

That is why it makes sense to minimize our investment (time) and maximize our profits (test coverage). We can do this by following these rules:

- If we can test all possible scenarios by writing only a few tests, we shouldn't waste our time writing integration tests for our repository method. We should write end-to-end tests which ensure that the feature works as expected.
- If we need to write more than a few tests, we should write integration tests for our repository method, and write only a few end-to-end tests (smoke tests).

Summary

This blog post has taught us two things:

- We should not waste our time writing tests for a data access framework (or library) written by someone else. If we don't trust that framework (or library), we should not use it.
- Sometimes we should not write integration tests for our data access code either. If the tested code is simple enough (we can cover all situations by writing a few tests), we should test it by writing end-to-end tests.

Reference: Writing Tests for Data Access Code – Don't Test the Framework from our JCG partner Petri Kainulainen at the Petri Kainulainen blog.

Converting XML to CSV using XSLT 1.0

This post shows you how to convert a simple XML file to CSV using XSLT. Consider the following sample XML:

```xml
<library>
  <book>
    <author>Dan Simmons</author>
    <title>Hyperion</title>
    <publishDate>1989</publishDate>
  </book>
  <book>
    <author>Douglas Adams</author>
    <title>The Hitchhiker's Guide to the Galaxy</title>
    <publishDate>1979</publishDate>
  </book>
</library>
```

This is the desired CSV output:

```
author,title,publishDate
Dan Simmons,Hyperion,1989
Douglas Adams,The Hitchhiker's Guide to the Galaxy,1979
```

The following XSL style sheet (compatible with XSLT 1.0) can be used to transform the XML into CSV. It is quite generic and can easily be configured to handle different XML elements by changing the list of fields defined at the beginning.

```xml
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text" />

  <xsl:variable name="delimiter" select="','" />

  <!-- define an array containing the fields we are interested in -->
  <xsl:variable name="fieldArray">
    <field>author</field>
    <field>title</field>
    <field>publishDate</field>
  </xsl:variable>
  <xsl:param name="fields" select="document('')/*/xsl:variable[@name='fieldArray']/*" />

  <xsl:template match="/">
    <!-- output the header row -->
    <xsl:for-each select="$fields">
      <xsl:if test="position() != 1">
        <xsl:value-of select="$delimiter"/>
      </xsl:if>
      <xsl:value-of select="." />
    </xsl:for-each>
    <!-- output newline -->
    <xsl:text>&#xa;</xsl:text>
    <xsl:apply-templates select="library/book"/>
  </xsl:template>

  <xsl:template match="book">
    <xsl:variable name="currNode" select="." />
    <!-- output the data row -->
    <!-- loop over the field names and find the value of each one in the xml -->
    <xsl:for-each select="$fields">
      <xsl:if test="position() != 1">
        <xsl:value-of select="$delimiter"/>
      </xsl:if>
      <xsl:value-of select="$currNode/*[name() = current()]" />
    </xsl:for-each>
    <!-- output newline -->
    <xsl:text>&#xa;</xsl:text>
  </xsl:template>
</xsl:stylesheet>
```

Let's try it out:

```
$ xsltproc xml2csv.xsl books.xml
author,title,publishDate
Dan Simmons,Hyperion,1989
Douglas Adams,The Hitchhiker's Guide to the Galaxy,1979
```

Reference: Converting XML to CSV using XSLT 1.0 from our JCG partner Fahd Shariff at the fahd.blog blog.
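If you don't have xsltproc at hand, the JDK's built-in javax.xml.transform API can run XSLT 1.0 stylesheets too. Below is an illustrative sketch (the class name is mine) that applies a stylesheet to an XML document, both passed as strings. Note that it uses a simplified stylesheet with a hard-coded field list, because the document('') trick in the generic stylesheet above needs a resolvable system id, which a plain string source does not have.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class Xml2Csv {

    // Apply an XSLT 1.0 stylesheet to an XML document, both given as strings.
    static String transform(String xml, String xslt) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<library><book><author>Dan Simmons</author>"
                + "<title>Hyperion</title><publishDate>1989</publishDate></book></library>";
        // Simplified stylesheet: fields hard-coded instead of the document('') trick
        String xslt = "<?xml version=\"1.0\"?>"
                + "<xsl:stylesheet version=\"1.0\""
                + " xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">"
                + "<xsl:output method=\"text\"/>"
                + "<xsl:template match=\"/\">"
                + "<xsl:for-each select=\"library/book\">"
                + "<xsl:value-of select=\"author\"/>,"
                + "<xsl:value-of select=\"title\"/>,"
                + "<xsl:value-of select=\"publishDate\"/>"
                + "<xsl:text>&#10;</xsl:text>"
                + "</xsl:for-each></xsl:template></xsl:stylesheet>";
        System.out.println(transform(xml, xslt));
    }
}
```

Running main prints the same data row that xsltproc produced for the first book.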

JavaFX Tip 11: Updating Read-Only Properties

Custom controls often feature "read-only" properties. This means that they cannot be set from outside the control, not even from their own skin class. It is often the behaviour of a control that leads to a change of a read-only property. In JavaFX this behaviour can be implemented in the control itself or in the skin. So we sometimes end up with a skin wanting to update a read-only property of the control. How can this be done?

Backdoor: Property Map

The solution is quite simple: use the properties map of the control as a backdoor to the control class. The properties map is observable, so if the skin sets a value in the map then the control will be informed and can update the value of the read-only property itself.

The Control Class

The property in the control class might be defined like this:

private final ReadOnlyDoubleWrapper myReadOnly =
    new ReadOnlyDoubleWrapper();

public final ReadOnlyDoubleProperty myReadOnlyProperty() {
    return myReadOnly.getReadOnlyProperty();
}

public final double getMyReadOnly() {
    return myReadOnly.get();
}

To update the property the control class registers a listener with its own property map and listens for changes to the property called "myReadOnly":

getProperties().addListener(new MapChangeListener<Object, Object>() {
    @Override
    public void onChanged(Change<?, ?> c) {
        if (c.wasAdded() && "myReadOnly".equals(c.getKey())) {
            if (c.getValueAdded() instanceof Number) {
                // accept any Number: the skin might put an Integer
                myReadOnly.set(((Number) c.getValueAdded()).doubleValue());
            }
            getProperties().remove("myReadOnly");
        }
    }
});

Important: make sure to use a unique name for the property key or you might end up with naming conflicts. It is good practice to prefix the name with the package name of your control, e.g. com.myframework.myReadOnly.

The Skin Class

Now the skin class can update the property by setting the value in the control's property map:

getSkinnable().getProperties().put("myReadOnly", 42);

Reference: JavaFX Tip 11: Updating Read-Only Properties from our JCG partner Dirk Lemmermann at the Pixel Perfect blog.
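The interplay between skin and control can be sketched without any JavaFX dependency. Everything below (SimpleObservableMap, MyControl) is a hypothetical stand-in for javafx.collections.ObservableMap and a real Control subclass; it only demonstrates the flow: the "skin" puts a value into the map, the listener consumes it and updates the otherwise read-only field.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

public class ReadOnlyBackdoorDemo {

    /** Minimal stand-in for an observable map: notifies one listener on put. */
    public static class SimpleObservableMap {
        private final Map<String, Object> map = new HashMap<>();
        private BiConsumer<String, Object> onAdded;

        void setOnAdded(BiConsumer<String, Object> listener) { onAdded = listener; }

        public void put(String key, Object value) {
            map.put(key, value);
            if (onAdded != null) onAdded.accept(key, value);
        }

        void remove(String key) { map.remove(key); }
    }

    /** The control: exposes myReadOnly read-only, updates it via the map. */
    public static class MyControl {
        private double myReadOnly; // only the control itself writes this
        private final SimpleObservableMap properties = new SimpleObservableMap();

        public MyControl() {
            properties.setOnAdded((key, value) -> {
                if ("com.myframework.myReadOnly".equals(key) && value instanceof Number) {
                    myReadOnly = ((Number) value).doubleValue();
                    properties.remove(key); // consume the backdoor entry
                }
            });
        }

        public double getMyReadOnly() { return myReadOnly; }
        public SimpleObservableMap getProperties() { return properties; }
    }

    public static void main(String[] args) {
        MyControl control = new MyControl();
        // What the skin would do: getSkinnable().getProperties().put(...)
        control.getProperties().put("com.myframework.myReadOnly", 42);
        System.out.println(control.getMyReadOnly()); // prints 42.0
    }
}
```

Note that the listener removes the entry again after consuming it, just as in the article's code, so the map never keeps stale backdoor values around.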

JavaFX Tip 10: Custom Composite Controls

Writing custom controls in JavaFX is a simple and straightforward process. A control class is needed for controlling the state of the control (hence the name). A skin class is needed for the appearance of the control. And more often than not a CSS file for customizing the appearance.

A common approach for controls is to hide the nodes they are using inside their skin class. The TextField control for example uses two instances of javafx.scene.text.Text: one for the regular text, one for the prompt text. These nodes are not accessible via the TextField API. If you want to get a reference to them you would need to call the lookup(String) method on Node. So far so good. It is actually hard to think of use cases where you would actually need access to the Text nodes.

But it becomes a whole different story if you develop complex custom controls. The FlexGanttFX Gantt charting framework is one example. The GanttChart control consists of many other complex controls and, following the "separation of concerns" principle, these controls carry all those methods and properties that are relevant for them to work properly. If these controls were hidden inside the skin of the Gantt chart then there would be no way to access them and the Gantt chart control would need to implement a whole bunch of delegation methods. This would completely clutter the Gantt chart API. For this reason the GanttChart class does provide accessor methods to its child controls and even factory methods for creating the child nodes.

Example

The following screenshot shows a new control I am currently working on for the ControlsFX project. I am calling it ListSelectionView and it features two ListView instances. The user can move items from one list to another by either double clicking on them or by using the buttons in the middle.

List views are complex controls. They have their own data and selection models, their own cell factories, they fire events, and so on. All of these things we might want to either customize or listen to, which is hard to do if the views are hidden in the skin class. The solution is to create the list views inside the control class via protected factory methods and to provide accessor methods. The following code fragment shows the pattern that can be used:

public class ListSelectionView<T> extends Control {

    private ListView<T> sourceListView;
    private ListView<T> targetListView;

    public ListSelectionView() {
        sourceListView = createSourceListView();
        targetListView = createTargetListView();
    }

    protected ListView<T> createSourceListView() {
        return new ListView<>();
    }

    protected ListView<T> createTargetListView() {
        return new ListView<>();
    }

    public final ListView<T> getSourceListView() {
        return sourceListView;
    }

    public final ListView<T> getTargetListView() {
        return targetListView;
    }
}

The factory methods can be used to create standard ListView instances and configure them right there, or to return already existing ListView specializations. A company called ACME might already provide a standard set of controls (that implement the company's marketing concept). Then the factory methods might return a control called ACMEListView.

Reference: JavaFX Tip 10: Custom Composite Controls from our JCG partner Dirk Lemmermann at the Pixel Perfect blog.
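The ACME scenario described above can be sketched with plain Java stand-ins (ListView here is a hypothetical minimal class, not javafx.scene.control.ListView, so the example runs without JavaFX on the classpath): a subclass overrides one factory method and the composite picks up the specialized child automatically.

```java
import java.util.ArrayList;
import java.util.List;

public class CompositeControlDemo {

    /** Dependency-free stand-in for a list view control. */
    public static class ListView<T> {
        public final List<T> items = new ArrayList<>();
    }

    /** The composite: children come from overridable factory methods. */
    public static class ListSelectionView<T> {
        private final ListView<T> sourceListView;
        private final ListView<T> targetListView;

        public ListSelectionView() {
            // Note: the factories run during construction, so subclasses
            // must not rely on their own fields inside the overrides.
            sourceListView = createSourceListView();
            targetListView = createTargetListView();
        }

        protected ListView<T> createSourceListView() { return new ListView<>(); }
        protected ListView<T> createTargetListView() { return new ListView<>(); }

        public final ListView<T> getSourceListView() { return sourceListView; }
        public final ListView<T> getTargetListView() { return targetListView; }
    }

    /** Hypothetical company-specific list view, as in the ACME example. */
    public static class ACMEListView<T> extends ListView<T> { }

    /** Subclass swapping in the specialized source list via the factory. */
    public static class ACMEListSelectionView<T> extends ListSelectionView<T> {
        @Override
        protected ListView<T> createSourceListView() {
            return new ACMEListView<>();
        }
    }

    public static void main(String[] args) {
        ListSelectionView<String> view = new ACMEListSelectionView<>();
        System.out.println(view.getSourceListView() instanceof ACMEListView); // true
        System.out.println(view.getTargetListView() instanceof ACMEListView); // false
    }
}
```

One caveat worth noting with this pattern: because the factory methods are invoked from the superclass constructor, an override runs before the subclass's own fields are initialized, so it should stay self-contained.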

Testing Love and Polyamorous TDD

The rigor and quality of testing in the current software development world leaves a lot to be desired. Additionally, it feels like we are living in the dark ages, with mysterious edicts about the "right" way to test being delivered by an anointed few vocal prophets, and little or no effort given to educating the general populace about why it is "right"; the effort goes into evangelizing instead. I use the religious metaphor because it seems to me that a very large amount of the rhetoric is intended to sway people to follow a particular set of ceremonies without doing a good job of explaining the underpinnings and why those ceremonies have value.

I read with interest a post by David Heinemeier Hansson titled "TDD is dead. Long live testing" that pretty much sums up my opinion of the current state of affairs in this regard: a number of zealots proclaiming TDD to be the "one true way", but not a lot of evidence that this is actually true. Yes, Test Driven Development (TDD) is a good practice, but it is NOT necessarily superior to integration testing, penetration testing, operational readiness testing, disaster recovery testing, or any of a large number of other validation activities that should be part of a software delivery practice. Embracing and developing a passion for all manner of testing is an important part of being a well-rounded, enlightened, and effective software developer.

Since I have this perspective, I'm particularly jostled by the position outlined in Bob Martin's treatise on monogamous TDD as the one true way. In direct reaction to that post, I propose we start to look at software validation as an entire spectrum of practices that we'll just call Polyamorous TDD. The core tenets of this approach are that openness, communication, the value of people, and defining quality are more important than rigorous adherence to specific practices. Furthermore, we should promote the idea that the best way to do things often depends on which particular group of people is doing them (note, Agile folks, does this sound familiar?).

I chose the term Polyamory instead of Polygamy or Monogamy for the following reasons:

- It implies there are multiple "correct" ways to test your code, but you are not necessarily married to any one, or even a specific group of them.
- It further suggests that testing is about openness and loving your code instead of adhering to some sort of contract.
- On a more subtle level, it reinforces the notion that acceptance, openness, and communication are valued over strict adherence to a particular practice or set of practices.

All this is an attempt to promote the idea that it's more important that we come together to build understanding about the value of better validating our code than to convert people to the particular practice that works for us individually. To build this understanding, we need to more actively embrace new ideas, explore them, and keep open lines of communication that are free of drama and contention. This will not happen if we cannot openly admit that there is more than one "right" way to do things and we keep preaching the same tired story that many of us have already heard and have frankly progressed beyond. It's OK to be passionate about a particular viewpoint, but we still need to be respectful and check our egos at the door when it comes to this topic.

As a final, tangential point regarding Uncle Bob's apparent redefinition of the word "fundamentalism" in his post: as far as I can see, the definition he chose was never actually in use. While I understand what he was trying to say, he was simply wrong, and DHH's use of the word, based on the definition I've seen, is still very apt.

From the dictionary:

1 a (often capitalized): a movement in 20th century Protestantism emphasizing the literally interpreted Bible as fundamental to Christian life and teaching
  b: the beliefs of this movement
  c: adherence to such beliefs
2: a movement or attitude stressing strict and literal adherence to a set of basic principles <Islamic fundamentalism> <political fundamentalism>

Uncle Bob, please be careful when rebuffing folks on improper word usage, and try not to invent new definitions of words, especially when you're in a position of perceived authority in our little world of software development. Express your opinion or the facts, and be careful when you state an opinion as if it were a fact; doing so only leads to confusion and misunderstanding.

Reference: Testing Love and Polyamorous TDD from our JCG partner Mike Mainguy at the mike.mainguy blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.