
9 Differences between TCP and UDP Protocol – Java Network Interview Question

TCP and UDP are the two transport layer protocols used extensively on the Internet for transmitting data from one host to another. A good understanding of how TCP and UDP work is essential for any programmer, which is why the difference between TCP and UDP is a popular Java programming interview question. I have seen this question many times in Java interviews, especially for server-side Java developer positions. Since FIX (Financial Information Exchange) is a TCP-based protocol, several investment banks, hedge funds, and exchange solution providers look for Java developers with good knowledge of TCP and UDP. Writing FIX engines and server-side components for high-speed electronic trading platforms requires capable developers with a solid understanding of the fundamentals, including data structures, algorithms and networking. The use of TCP and UDP is not limited to one area, either; they are at the heart of the Internet: HTTP, the protocol the web runs on, is built on top of TCP. One more reason why a Java developer should understand these two protocols in detail is that Java is extensively used to write multi-threaded, concurrent and scalable servers, and it provides a rich socket programming API for both TCP- and UDP-based communication. In this article we will learn the key differences between the TCP and UDP protocols. To start with, TCP stands for Transmission Control Protocol and UDP stands for User Datagram Protocol, and both are used extensively to build Internet applications.

Differences between TCP and UDP

I love to compare two things on different points; this not only makes them easier to compare but also makes the differences easier to remember. Comparing TCP with UDP shows how each protocol works, which one provides reliable and guaranteed delivery and which doesn't, which protocol is faster and why, and, most importantly, when to choose TCP over UDP when building your own distributed application. In this article we will see the difference between UDP and TCP in nine points: connection set-up, reliability, ordering, speed, overhead, header size, congestion control, typical applications, the protocols built on top of each, and how they transfer data.

Connection oriented vs connectionless

The first and foremost difference is that TCP is a connection-oriented protocol while UDP is a connectionless protocol. This means a connection must be established between client and server before they can send data over TCP. The connection establishment process is also known as the TCP handshake, during which control messages are exchanged between client and server. The client, which initiates the TCP connection, sends a SYN message to the server, which is listening on a TCP port. The server receives it and replies with a SYN-ACK message, which the client acknowledges with an ACK. Once the server receives this ACK, the TCP connection is established and ready for data transmission. UDP, on the other hand, is connectionless: no point-to-point connection is established before sending messages. That is why UDP is more suitable for multicast distribution of messages, i.e. one-to-many distribution of data in a single transmission.
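To make the connection-oriented model concrete, here is a minimal, hedged sketch using Java's standard socket API (the port number and message are arbitrary choices): creating the Socket on the client side performs the three-way handshake, and the server's accept() returns only once a connection has been established.

import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpHandshakeExample {
    public static void main(String[] args) throws IOException {
        // The server listens on a TCP port; accept() returns once a handshake has completed
        try (ServerSocket server = new ServerSocket(9000)) {
            new Thread(() -> {
                // Creating the Socket performs the SYN / SYN-ACK / ACK exchange
                try (Socket client = new Socket("localhost", 9000);
                     OutputStream out = client.getOutputStream()) {
                    out.write("hello over TCP".getBytes());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }).start();

            try (Socket accepted = server.accept()) {
                byte[] buffer = new byte[64];
                int read = accepted.getInputStream().read(buffer);
                System.out.println("Received: " + new String(buffer, 0, read));
            }
        }
    }
}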
Reliability

TCP provides a delivery guarantee: a message sent using the TCP protocol is guaranteed to be delivered to the receiver. If a message is lost in transit it is recovered by retransmission, which is handled by TCP itself. UDP, on the other hand, is unreliable and provides no delivery guarantee; a datagram may simply be lost in transit. That's why UDP is not suitable for programs that require guaranteed delivery.

Ordering

Apart from the delivery guarantee, TCP also guarantees the order of messages. Messages are delivered to the receiver in the same order the sender sent them, even though packets may arrive out of order at the other end of the network; the TCP protocol does all the sequencing and reordering for you. UDP provides no ordering or sequencing guarantee, and datagram packets may arrive in any order. That's why TCP is suitable for applications that need in-order delivery, although there are also UDP-based protocols that add ordering and reliability on top of UDP by using sequence numbers and redelivery, e.g. TIBCO Rendezvous, which is actually a UDP-based product.

Data boundary

TCP does not preserve data boundaries; UDP does. With TCP, data is sent as a byte stream and no distinguishing markers are transmitted to signal message (segment) boundaries. With UDP, packets are sent individually and are checked for integrity only if they arrive. Packets have definite boundaries which are honoured on receipt: a read operation at the receiver socket yields an entire message exactly as it was originally sent. TCP will, of course, also deliver the complete message once all bytes have been assembled; messages are buffered by TCP before sending to make optimum use of network bandwidth.
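As an illustration of both points, here is a small, hedged Java sketch (port and payloads are arbitrary): the sender transmits two datagrams without any connection setup, and each receive() call on the other side returns exactly one datagram, so message boundaries are preserved. Being UDP, the datagrams could in principle be lost or reordered.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpBoundaryExample {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(9001);
             DatagramSocket sender = new DatagramSocket()) {

            // No handshake: the sender just fires two independent datagrams
            byte[] first = "first message".getBytes();
            byte[] second = "second message".getBytes();
            InetAddress localhost = InetAddress.getByName("localhost");
            sender.send(new DatagramPacket(first, first.length, localhost, 9001));
            sender.send(new DatagramPacket(second, second.length, localhost, 9001));

            // Each receive() returns exactly one datagram, so message boundaries are preserved
            for (int i = 0; i < 2; i++) {
                DatagramPacket packet = new DatagramPacket(new byte[1024], 1024);
                receiver.receive(packet);
                System.out.println(new String(packet.getData(), 0, packet.getLength()));
            }
        }
    }
}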
Speed

In one word: TCP is slow and UDP is fast. Since TCP has to establish a connection and ensure guaranteed, ordered delivery, it does a lot more work than UDP, and that costs it speed. That's why UDP is more suitable where speed is the main concern, for example online video streaming, broadcasting or online multiplayer games.

Heavyweight vs lightweight

Because of the overhead described above, TCP is considered heavyweight compared to the lightweight UDP protocol. The simple mantra of UDP is to deliver messages without bearing the overhead of creating a connection or guaranteeing delivery and order. This is also reflected in their header sizes, which carry each protocol's metadata.

Header size

TCP has a bigger header than UDP. The usual header size of a TCP segment is 20 bytes, more than double the 8-byte header of a UDP datagram. The TCP header carries the sequence number, acknowledgement number, data offset, reserved bits, control bits, window, urgent pointer, options, padding, checksum, and source and destination ports. The UDP header contains only the length, source port, destination port and checksum.

Congestion and flow control

TCP does flow control. It requires three packets to set up a socket connection before any user data can be sent, and it handles reliability and congestion control. UDP, on the other hand, has no option for flow control.

Usage and applications

Where are TCP and UDP used on the Internet? Knowing the key differences, we can easily conclude which situations suit each protocol. Since TCP provides delivery and ordering guarantees, it is best suited for applications that require high reliability and where transmission time is relatively less critical. UDP is more suitable for applications that need fast, efficient transmission, such as games. UDP's stateless nature is also useful for servers that answer small queries from huge numbers of clients. In practice, TCP is used heavily in the finance domain, e.g. the FIX protocol is TCP based, while UDP is used heavily in gaming and entertainment sites.

TCP- and UDP-based protocols

The best examples of higher-level protocols based on TCP are HTTP and HTTPS, which are everywhere on the Internet. In fact, most of the common protocols you are familiar with, e.g. Telnet, FTP and SMTP, are based on the Transmission Control Protocol. UDP doesn't have anything as popular as HTTP, but it is used extensively by protocols such as DHCP and DNS. Other protocols based on the User Datagram Protocol are the Simple Network Management Protocol (SNMP), TFTP, BOOTP and NFS (early versions).

In an interview, always remember to mention that TCP is connection oriented, reliable, slower, provides guaranteed delivery and preserves the order of messages, while UDP is connectionless, unreliable, offers no ordering guarantee, but is faster. TCP's overhead is also much higher than UDP's, as it transmits more metadata per packet: the TCP header is 20 bytes, compared to UDP's 8-byte header. Use TCP if you can't afford to lose any message, while UDP is better for high-speed data transmission where the loss of a single packet is acceptable, e.g. video streaming or online multiplayer games. While working on TCP/UDP-based applications on Linux, it is also good to know the basic networking commands, e.g. telnet and netstat; they help tremendously when debugging or troubleshooting connection issues.

Reference: 9 Difference between TCP and UDP Protocol – Java Network Interview Question from our JCG partner Javin Paul at the Javarevisited blog.

Java Keystore Tutorial

Table of Contents

1. Introduction
2. SSL and how it works
3. Private Keys
4. Public Certificates
5. Root Certificates
6. Certificate Authorities
7. Certificate Chain
8. Keystore using Java keytool
9. Keystore Commands
10. Configure SSL using Keystores and Self Signed Certificates on Apache Tomcat

1. Introduction

Most of us have visited eBay or Amazon to buy something, or logged in to a personal bank account to check it. Do you think those sites are secure enough to trust with personal data such as a credit card number or bank account number? Most of them use the Secure Sockets Layer (SSL) protocol to secure their Internet applications. SSL allows the data from a client, such as a web browser, to be encrypted prior to transmission so that someone trying to sniff the data is unable to decipher it. Many Java application servers and web servers support the use of keystores for SSL configuration. If you're building secure Java programs, learning to build a keystore is the first step.

2. SSL and how it works

An HTTP-based SSL connection is always initiated by the client using a URL starting with https:// instead of http://. At the beginning of an SSL session, an SSL handshake is performed. This handshake produces the cryptographic parameters of the session. In short, this is how it works:

- The browser requests a secure page (usually https://).
- The web server sends its public key with its certificate.
- The browser checks that the certificate was issued by a trusted party (usually a trusted root CA), that the certificate is still valid and that the certificate is related to the site contacted.
- The browser then uses the public key to encrypt a random symmetric encryption key and sends it to the server, along with the encrypted URL and other encrypted HTTP data.
- The web server decrypts the symmetric encryption key using its private key and uses the symmetric key to decrypt the URL and HTTP data.
- The web server sends back the requested HTML document and HTTP data encrypted with the symmetric key.
- The browser decrypts the HTTP data and HTML document using the symmetric key and displays the information.

The world of SSL has, essentially, three types of certificates: private keys, public keys (also called public certificates or site certificates), and root certificates.

3. Private Keys

The private key contains the identity information of the server, along with a key value. The server should keep this key safe and protected by a password, because it is used during the SSL handshake. Anyone who obtains it can use it to decrypt your traffic and get at your personal information; it is like leaving your house key in the door lock.

4. Public Certificates

The public certificate (public key) is the portion that is presented to a client; it is like the passport you show at the airport. The public certificate, tightly associated with the private key, is created from the private key using a Certificate Signing Request (CSR). After you create a private key, you create a CSR, which is sent to your Certificate Authority (CA). The CA returns a signed certificate, which contains information about the server identity and about the CA.

5. Root Certificates

A root CA certificate is a CA certificate that is simply a self-signed certificate. It represents an entity that issues certificates, known as a Certificate Authority (CA), such as VeriSign, Thawte, etc.
6. Certificate Authorities

Certificate authorities are companies that will sign certificates for you, such as VeriSign, Thawte, Comodo and GeoTrust. Many companies and institutions also act as their own CA, either by building a complete implementation from scratch or by using an open source option such as OpenSSL.

7. Certificate Chain

When a server and client establish an SSL connection, a certificate is presented to the client; the client has to determine whether to trust this certificate by examining the certificate chain. The client looks at the issuer of the certificate, searches its list of trusted root certificates, and compares the issuer of the presented certificate to the subjects of the trusted certificates. If a match is found, the connection proceeds. If not, the web browser may pop up a dialog box warning you that it cannot trust the certificate and offering the option to trust it anyway.

8. Keystore using Java keytool

Java keytool is a key and certificate management utility. It allows users to manage their own public/private key pairs and certificates. Keytool stores the keys and certificates in what is called a keystore and protects private keys with a password. Each certificate in a Java keystore is associated with a unique alias. When creating a Java keystore you first create the .jks file that initially contains only the private key, then generate a CSR. Finally you import the signed certificate into the keystore, including any root certificates.

9. Keystore Commands

Create keystores, keys and certificate requests:

Generate a Java keystore and key pair:
keytool -genkey -alias mydomain -keyalg RSA -keystore keystore.jks -storepass password

Generate a certificate signing request (CSR) for an existing Java keystore:
keytool -certreq -alias mydomain -keystore keystore.jks -storepass password -file mydomain.csr

Generate a keystore and self-signed certificate:
keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass password -validity 360

Import certificates:

Import a root or intermediate CA certificate into an existing Java keystore:
keytool -import -trustcacerts -alias root -file Thawte.crt -keystore keystore.jks -storepass password

Import a signed primary certificate into an existing Java keystore:
keytool -import -trustcacerts -alias mydomain -file mydomain.crt -keystore keystore.jks -storepass password

Export certificates:

Export a certificate from a keystore:
keytool -export -alias mydomain -file mydomain.crt -keystore keystore.jks -storepass password

Check/list/view certificates:

Check a stand-alone certificate:
keytool -printcert -v -file mydomain.crt

Check which certificates are in a Java keystore:
keytool -list -v -keystore keystore.jks -storepass password

Check a particular keystore entry using an alias:
keytool -list -v -keystore keystore.jks -storepass password -alias mydomain

Delete certificates:

Delete a certificate from a Java keystore:
keytool -delete -alias mydomain -keystore keystore.jks -storepass password

Change passwords:

Change a Java keystore password:
keytool -storepasswd -new new_storepass -keystore keystore.jks -storepass password

Change a private key password:
keytool -keypasswd -alias client -keypass old_password -new new_password -keystore client.jks -storepass password
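A keystore created with these commands can also be read programmatically through the standard java.security.KeyStore API. Here is a minimal, hedged sketch (the file name, password and output format are illustrative, matching the examples above), roughly equivalent to "keytool -list":

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.util.Collections;

public class ListKeystoreEntries {
    public static void main(String[] args) throws Exception {
        KeyStore keyStore = KeyStore.getInstance("JKS");

        // Load the keystore created with keytool; the password protects its integrity
        try (FileInputStream in = new FileInputStream("keystore.jks")) {
            keyStore.load(in, "password".toCharArray());
        }

        // Print every alias and the type of its certificate
        for (String alias : Collections.list(keyStore.aliases())) {
            Certificate certificate = keyStore.getCertificate(alias);
            System.out.println(alias + " -> " + certificate.getType());
        }
    }
}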
10. Configure SSL using Keystores and Self-Signed Certificates on Apache Tomcat

Generate a new keystore and self-signed certificate. Running this command will prompt you to enter specific information such as user name, organizational unit, company and location:

keytool -genkey -alias tomcat -keyalg RSA -keystore /home/ashraf/Desktop/JavaCodeGeek/keystore.jks -validity 360

You can list the details of the certificate you just created using this command:

keytool -list -keystore /home/ashraf/Desktop/JavaCodeGeek/keystore.jks

Download Tomcat 7 and configure Tomcat's server to support SSL (https) connections by adding a connector element to Tomcat\conf\server.xml:

<Connector port="8443" maxThreads="150" scheme="https" secure="true" SSLEnabled="true" keystoreFile="/home/ashraf/Desktop/JavaCodeGeek/.keystore" keystorePass="password" clientAuth="false" keyAlias="tomcat" sslProtocol="TLS" />

Start Tomcat and go to https://localhost:8443/. The browser will present an untrusted-certificate error message. In the case of e-commerce, such error messages result in an immediate lack of confidence in the website, and organizations risk losing business from the majority of consumers; this is normal here, as your certificate isn't yet signed by a CA such as Thawte or VeriSign, who would verify the identity of the requester and issue a signed certificate. You can click "Proceed anyway" until you receive your signed certificate.
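While you are still using the self-signed certificate, a plain Java client will also reject the connection unless it is told to trust it. One hedged way to try this out (paths and password taken from the example above, and assuming the certificate's common name was set to localhost when it was generated) is to point the JVM's trust store at the same keystore via system properties:

import java.io.InputStream;
import java.net.URL;

public class SelfSignedClientExample {
    public static void main(String[] args) throws Exception {
        // Trust the self-signed certificate by using the keystore above as the trust store
        System.setProperty("javax.net.ssl.trustStore", "/home/ashraf/Desktop/JavaCodeGeek/keystore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "password");

        URL url = new URL("https://localhost:8443/");
        try (InputStream in = url.openStream()) {
            System.out.println("Connected over HTTPS, first byte: " + in.read());
        }
    }
}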

Enterprise Integration Patterns (EIP) Revisited in 2014

Today I gave a talk called "Enterprise Integration Patterns (EIP) Revisited in 2014" at Java Forum Stuttgart 2014, a great conference for developers and architects with 1600 attendees.

Enterprise Integration Patterns

Data exchange between companies is increasing a lot, and with it the number of applications which must be integrated. The emergence of service-oriented architectures and cloud computing boosts this even more. Realizing these integration scenarios is a complex and time-consuming task because different applications and services do not use the same concepts, interfaces, data formats and technologies. Originated and published over ten years ago by Gregor Hohpe and Bobby Woolf, Enterprise Integration Patterns (EIP) became the worldwide de facto standard for describing integration problems. They offer a standardized way to split huge, complex integration scenarios into smaller recurring problems. These patterns appear in almost every integration project. Most developers have already used some of these patterns, such as the filter, splitter or content-based router – some of them without being aware of using EIPs. Today, EIPs are still used to reduce effort and complexity a lot. This session revisits EIPs and gives an overview of the status quo.

Open Source, Apache Camel, Talend ESB, JBoss, WSO2, TIBCO BusinessWorks, StreamBase, IBM WebSphere, Oracle, …

Fortunately, EIPs offer more possibilities than just being used for modelling integration problems in a standardized way. Several frameworks and tools already implement these patterns, so the developer does not have to implement EIPs on his own. Therefore, the end of the session shows the different frameworks and tools available, which can be used for modelling and implementing complex integration scenarios using the EIPs.

Slides

Reference: Enterprise Integration Patterns (EIP) Revisited in 2014 from our JCG partner Kai Waehner at the Blog about Java EE / SOA / Cloud Computing blog.
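As a taste of how these patterns look when implemented by one of the frameworks mentioned above, here is a hedged sketch of the content-based router EIP using Apache Camel's Java DSL; the endpoint URIs and the "type" header are made-up placeholders for illustration only.

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class ContentBasedRouterExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Content-Based Router: send each incoming order to a different
                // destination depending on a (hypothetical) "type" header
                from("file:orders/in")
                    .choice()
                        .when(header("type").isEqualTo("premium"))
                            .to("file:orders/premium")
                        .otherwise()
                            .to("file:orders/standard");
            }
        });
        context.start();
        Thread.sleep(10000); // let the route poll for a while, then shut down
        context.stop();
    }
}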

Writing Tests for Data Access Code – Don’t Test the Framework

When we write tests for our data access code, should we test every method of its public API? It sounds natural at first. After all, if we don't test everything, how can we know that our code works as expected? That question provides us with an important clue: our code. We should write tests only for our own code.

What Is Our Own Code?

It is sometimes hard to identify the code which we should test. The reason for this is that our data access code is integrated tightly with the library or framework which we use when we save information to the data storage or read information from it. For example, if we want to create a Spring Data JPA repository which provides CRUD operations for Todo objects, we should create an interface which extends the CrudRepository interface. The source code of the TodoRepository interface looks as follows:

import org.springframework.data.repository.CrudRepository;

public interface TodoRepository extends CrudRepository<Todo, Long> {
}

Even though we haven't added any methods to our repository interface, the CrudRepository interface declares many methods which are available to the classes that use our repository interface. These methods are not our code because they are implemented and maintained by the Spring Data team. We only use them. On the other hand, if we add a custom query method to our repository, the situation changes. Let's assume that we have to find all todo entries whose title is equal to the given search term. After we have added this query method to our repository interface, its source code looks as follows:

import java.util.List;

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

public interface TodoRepository extends CrudRepository<Todo, Long> {

    @Query("SELECT t FROM Todo t where t.title=:searchTerm")
    public List<Todo> search(@Param("searchTerm") String searchTerm);
}

It would be easy to claim that this method is our own code and that is why we should test it. However, the truth is a bit more complex. Even though the JPQL query was written by us, Spring Data JPA provides the code which passes that query on to the JPA provider being used. Still, I think that this query method is our own code, because the most essential part of it was written by us. If we want to identify our own data access code, we have to locate the essential part of each method. If this part was written by us, we should treat that method as our own code. This is all pretty obvious, and the more interesting question is:

Should We Test It?

Our repository interface provides two kinds of methods to the classes which use it:

- It provides methods that are declared by the CrudRepository interface.
- It provides a query method that was written by us.

Should we write integration tests for the TodoRepository interface and test all of these methods? No. We should not do this because:

- The methods declared by the CrudRepository interface are not our own code. This code is written and maintained by the Spring Data team, and they have ensured that it works. If we don't trust that their code works, we should not use it.
- Our application probably has many repository interfaces which extend the CrudRepository interface. If we decide to write tests for the methods declared by the CrudRepository interface, we have to write these tests for all repositories. If we choose this path, we will spend a lot of time writing tests for someone else's code and, frankly, it is not worth it.
- Our own code might be so simple that writing tests for our repository makes no sense.

In other words, we should concentrate on finding an answer to this question: should we write integration tests for our repository methods (the methods which were written by us), or should we just write end-to-end tests? The answer depends on the complexity of our repository method. I am aware that complexity is a pretty vague word, which is why we need some kind of guideline to help us find the best way of testing our repository methods. One way to make this decision is to think about the amount of work required to test every possible scenario. This makes sense because:

- It takes less work to write integration tests for a single repository method than to write the same tests for the feature that uses the repository method.
- We have to write end-to-end tests anyway.

That is why it makes sense to minimize our investment (time) and maximize our profits (test coverage). We can do this by following these rules:

- If we can test all possible scenarios by writing only a few tests, we shouldn't waste our time writing integration tests for our repository method. We should write end-to-end tests which ensure that the feature is working as expected.
- If we need to write more than a few tests, we should write integration tests for our repository method, and write only a few end-to-end tests (smoke tests).

Summary

This blog post has taught us two things:

- We should not waste our time writing tests for a data access framework (or library) written by someone else. If we don't trust that framework (or library), we should not use it.
- Sometimes we should not write integration tests for our data access code either. If the tested code is simple enough (we can cover all situations by writing a few tests), we should test it by writing end-to-end tests.

Reference: Writing Tests for Data Access Code – Don't Test the Framework from our JCG partner Petri Kainulainen at the Petri Kainulainen blog.
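For the case where the custom query method is complex enough to deserve its own integration test, here is a hedged sketch of what such a test could look like; the PersistenceContext configuration class, the Todo constructor and getter, and the test data are assumptions made purely for illustration.

import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

import static org.junit.Assert.assertEquals;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = PersistenceContext.class) // hypothetical JPA configuration class
@Transactional
public class TodoRepositoryIntegrationTest {

    @Autowired
    private TodoRepository repository;

    @Test
    public void searchReturnsTodoEntriesWithMatchingTitle() {
        // Todo(String title) is assumed here for brevity
        repository.save(new Todo("Write blog post"));
        repository.save(new Todo("Buy groceries"));

        List<Todo> results = repository.search("Write blog post");

        assertEquals(1, results.size());
        assertEquals("Write blog post", results.get(0).getTitle());
    }
}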

Converting XML to CSV using XSLT 1.0

This post shows you how to convert a simple XML file to CSV using XSLT. Consider the following sample XML:

<library>
  <book>
    <author>Dan Simmons</author>
    <title>Hyperion</title>
    <publishDate>1989</publishDate>
  </book>
  <book>
    <author>Douglas Adams</author>
    <title>The Hitchhiker's Guide to the Galaxy</title>
    <publishDate>1979</publishDate>
  </book>
</library>

This is the desired CSV output:

author,title,publishDate
Dan Simmons,Hyperion,1989
Douglas Adams,The Hitchhiker's Guide to the Galaxy,1979

The following XSL style sheet (compatible with XSLT 1.0) can be used to transform the XML into CSV. It is quite generic and can easily be configured to handle different XML elements by changing the list of fields defined at the beginning.

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text" />

  <xsl:variable name="delimiter" select="','" />

  <!-- define an array containing the fields we are interested in -->
  <xsl:variable name="fieldArray">
    <field>author</field>
    <field>title</field>
    <field>publishDate</field>
  </xsl:variable>
  <xsl:param name="fields" select="document('')/*/xsl:variable[@name='fieldArray']/*" />

  <xsl:template match="/">
    <!-- output the header row -->
    <xsl:for-each select="$fields">
      <xsl:if test="position() != 1">
        <xsl:value-of select="$delimiter"/>
      </xsl:if>
      <xsl:value-of select="." />
    </xsl:for-each>
    <!-- output newline -->
    <xsl:text>&#xa;</xsl:text>
    <xsl:apply-templates select="library/book"/>
  </xsl:template>

  <xsl:template match="book">
    <xsl:variable name="currNode" select="." />
    <!-- output the data row -->
    <!-- loop over the field names and find the value of each one in the xml -->
    <xsl:for-each select="$fields">
      <xsl:if test="position() != 1">
        <xsl:value-of select="$delimiter"/>
      </xsl:if>
      <xsl:value-of select="$currNode/*[name() = current()]" />
    </xsl:for-each>
    <!-- output newline -->
    <xsl:text>&#xa;</xsl:text>
  </xsl:template>
</xsl:stylesheet>

Let's try it out:

$ xsltproc xml2csv.xsl books.xml
author,title,publishDate
Dan Simmons,Hyperion,1989
Douglas Adams,The Hitchhiker's Guide to the Galaxy,1979

Reference: Converting XML to CSV using XSLT 1.0 from our JCG partner Fahd Shariff at the fahd.blog blog.
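If you prefer to run the transformation from Java instead of xsltproc, the JDK's built-in XSLT 1.0 processor can do the same job. A minimal, hedged sketch (the file names simply match the example above):

import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class Xml2Csv {
    public static void main(String[] args) throws Exception {
        // Compile the stylesheet and apply it to the XML document,
        // writing the CSV output to standard out
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("xml2csv.xsl")));
        transformer.transform(new StreamSource(new File("books.xml")),
                new StreamResult(System.out));
    }
}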

JavaFX Tip 11: Updating Read-Only Properties

Custom controls often feature "read-only" properties. This means that they cannot be set from outside the control, not even from their own skin class. It is often the behaviour of a control that leads to a change of a read-only property. In JavaFX this behaviour can be implemented in the control itself and in the skin, so we sometimes end up with a skin wanting to update a read-only property of the control. How can this be done?

Backdoor: Property Map

The solution is quite simple: use the properties map of the control as a backdoor to the control class. The properties map is observable, so if the skin sets a value in the map then the control will be informed and can update the value of the read-only property itself.

The Control Class

The property in the control class might be defined like this:

private final ReadOnlyDoubleWrapper myReadOnly = new ReadOnlyDoubleWrapper();

public final ReadOnlyDoubleProperty myReadOnlyProperty() {
    return myReadOnly.getReadOnlyProperty();
}

public final double getMyReadOnly() {
    return myReadOnly.get();
}

To update the property, the control class registers a listener with its own property map and listens for changes to the property called "myReadOnly". Note that the skin may put any Number into the map (the literal 42 below is an Integer), so the value is converted via Number.doubleValue() rather than cast directly to Double:

getProperties().addListener(new MapChangeListener<Object, Object>() {
    @Override
    public void onChanged(Change<? extends Object, ? extends Object> c) {
        if (c.wasAdded() && "myReadOnly".equals(c.getKey())) {
            if (c.getValueAdded() instanceof Number) {
                myReadOnly.set(((Number) c.getValueAdded()).doubleValue());
            }
            getProperties().remove("myReadOnly");
        }
    }
});

Important: make sure to use a unique name for the property key or you might end up with naming conflicts. It is good practice to prefix the name with the package name of your control, e.g. com.myframework.myReadOnly.

The Skin Class

Now the skin class can update the property by setting the property value in the control's property map:

getSkinnable().getProperties().put("myReadOnly", 42);

Reference: JavaFX Tip 11: Updating Read-Only Properties from our JCG partner Dirk Lemmermann at the Pixel Perfect blog.

JavaFX Tip 10: Custom Composite Controls

Writing custom controls in JavaFX is a simple and straightforward process. A control class is needed for controlling the state of the control (hence the name). A skin class is needed for the appearance of the control. And more often than not a CSS file is needed for customizing the appearance.

A common approach for controls is to hide the nodes they are using inside their skin class. The TextField control, for example, uses two instances of javafx.scene.text.Text: one for the regular text, one for the prompt text. These nodes are not accessible via the TextField API; if you want to get a reference to them you would need to call the lookup(String) method on Node. So far so good. It is actually hard to think of use cases where you would really need access to the Text nodes.

But it becomes a whole different story when you develop complex custom controls. The FlexGanttFX Gantt charting framework is one example. The GanttChart control consists of many other complex controls and, following the "separation of concerns" principle, these controls carry all the methods and properties that are relevant for them to work properly. If these controls were hidden inside the skin of the Gantt chart, there would be no way to access them, and the Gantt chart control would need to implement a whole bunch of delegation methods. This would completely clutter the Gantt chart API. For this reason the GanttChart class does provide accessor methods to its child controls, and even factory methods for creating the child nodes.

Example

The following screenshot shows a new control I am currently working on for the ControlsFX project. I am calling it ListSelectionView and it features two ListView instances. The user can move items from one list to another by either double-clicking on them or by using the buttons in the middle.

List views are complex controls. They have their own data and selection models, their own cell factories, they fire events, and so on. All of these things we might want to either customize or listen to – something that is hard to do if the views are hidden in the skin class. The solution is to create the list views inside the control class via protected factory methods and to provide accessor methods. The following code fragment shows the pattern that can be used:

public class ListSelectionView<T> extends Control {

    private ListView<T> sourceListView;
    private ListView<T> targetListView;

    public ListSelectionView() {
        sourceListView = createSourceListView();
        targetListView = createTargetListView();
    }

    protected ListView<T> createSourceListView() {
        return new ListView<>();
    }

    protected ListView<T> createTargetListView() {
        return new ListView<>();
    }

    public final ListView<T> getSourceListView() {
        return sourceListView;
    }

    public final ListView<T> getTargetListView() {
        return targetListView;
    }
}

The factory methods can be used to create standard ListView instances and configure them right there, or to return already existing ListView specializations. A company called ACME might already provide a standard set of controls (that implement the company's marketing concept); then the factory methods might return a control called ACMEListView.

Reference: JavaFX Tip 10: Custom Composite Controls from our JCG partner Dirk Lemmermann at the Pixel Perfect blog.
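To illustrate that last point, here is a hedged sketch of what such a specialization could look like; the ACME names and the style class are hypothetical, borrowed only from the naming used in the post above.

import javafx.scene.control.ListView;

// Hypothetical company-specific subclass of the control shown above
public class AcmeListSelectionView<T> extends ListSelectionView<T> {

    @Override
    protected ListView<T> createSourceListView() {
        ListView<T> listView = new ListView<>();
        // configure the standard list view right here, e.g. apply a company style class
        listView.getStyleClass().add("acme-list-view");
        return listView;
    }

    @Override
    protected ListView<T> createTargetListView() {
        // or return an already existing specialization such as the ACMEListView mentioned above
        return new AcmeListView<>();
    }
}

// Minimal placeholder for the hypothetical ACMEListView
class AcmeListView<T> extends ListView<T> {
}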

Testing Love and Polyamorous TDD

The rigor and quality of testing in the current software development world leaves a lot to be desired. Additionally, it feels like we are living in the dark ages, with mysterious edicts about the "right" way to test being delivered by an anointed few vocal prophets, and little or no effort given to educating the general populace about why it is "right"; the effort goes into evangelizing instead. I use the religious metaphor because a very large amount of the rhetoric seems intended to sway people to follow a particular set of ceremonies without doing a good job of explaining the underpinnings and why those ceremonies have value.

I read with interest a post by David Heinemeier Hansson titled "TDD is dead. Long live testing" that pretty much sums up my opinion of the current state of affairs in this regard: a number of zealots proclaiming TDD to be the "one true way", but not a lot of evidence that this is actually true. Yes, Test Driven Development (TDD) is a good practice, but it is NOT necessarily superior to integration testing, penetration testing, operational readiness testing, disaster recovery testing, or any of a large number of other validation activities that should be part of a software delivery practice. Embracing and developing a passion for all manner of testing is an important part of being a well-rounded, enlightened, and effective software developer.

Since I have this perspective, I'm particularly jostled by the view outlined in Bob Martin's treatise presenting monogamous TDD as the one true way. In direct reaction to that post, I propose we start to look at software validation as an entire spectrum of practices that we'll just call Polyamorous TDD. The core tenets of this approach are that openness, communication, the value of people, and defining quality are more important than rigorous adherence to specific practices. Furthermore, we should promote the idea that the best way to do things often depends on the particular group of people doing them (note, Agile folks, does this sound familiar?). I chose the term Polyamory instead of Polygamy or Monogamy for the following reasons:

- It implies there are multiple "correct" ways to test your code, but you are not necessarily married to any one of them, or even to a specific group of them.
- It further suggests that testing is about openness and loving your code instead of adhering to some sort of contract.
- On a more subtle level, it reinforces the notion that acceptance, openness, and communication are valued over strict adherence to a particular practice or set of practices.

All this is an attempt to promote the idea that it is more important that we come together to build understanding about the value of better validating our code than to convert people to the particular practice that works for us individually. To build this understanding, we need to more actively embrace new ideas, explore them, and keep lines of communication open and free of drama and contention. This will not happen if we cannot openly admit that there is more than one "right" way to do things and we keep preaching the same tired story that many of us have already heard and have frankly progressed beyond. It's OK to be passionate about a particular viewpoint, but we still need to be respectful and check our egos at the door when it comes to this topic.

A final, tangential note concerns Uncle Bob's apparent redefinition of the word fundamentalism in his post.
As far as I can see, the definition he chose to use has never actually been in use. While I understand what he was trying to say, he was simply wrong, and DHH's use of the word, based on the dictionary definition, is still very apt. From the dictionary:

1 a often capitalized : a movement in 20th century Protestantism emphasizing the literally interpreted Bible as fundamental to Christian life and teaching
  b : the beliefs of this movement
  c : adherence to such beliefs
2 : a movement or attitude stressing strict and literal adherence to a set of basic principles <Islamic fundamentalism> <political fundamentalism>

Uncle Bob, please be careful when rebuffing folks for improper word usage, and try not to invent new definitions of words, especially when you're in a position of perceived authority in our little world of software development. Express your opinion or the facts, and be careful when you state an opinion as if it were a fact; it only leads to confusion and misunderstanding.

Reference: Testing Love and Polyamorous TDD from our JCG partner Mike Mainguy at the mike.mainguy blog.

RabbitMQ in Multiple AWS Availability Zones

When working with AWS, in order to have a highly available setup, one must have instances in more than one availability zone (an AZ roughly corresponds to a data center). If one AZ dies (which may happen), your application should continue serving requests. It is simple to set up your application nodes in multiple AZs (if they are properly written to be stateless), but it is trickier for databases, message queues and everything else that has state. So let's see how to configure RabbitMQ. The first steps are relevant not only to RabbitMQ, but to any persistent data solution. First (no matter whether you use CloudFormation or a manual setup), you must:

- Have a VPC. It might be possible without a VPC, but I can't guarantee that, especially for the DNS hostnames discussed below.
- Declare private subnets (one for each AZ).
- Declare the RabbitMQ autoscaling group (it is recommended to have one) to span multiple AZs, using:
  "AvailabilityZones" : { "Fn::GetAZs" : { "Ref": "AWS::Region" } }
- Declare the RabbitMQ autoscaling group to span multiple subnets using the VPCZoneIdentifier property.
- Declare the load balancer in front of your RabbitMQ nodes (that is the easiest way to ensure even distribution of load to your Rabbit cluster) to span all the subnets.
- Declare the load balancer to be "CrossZone": true.

Then comes the RabbitMQ-specific configuration. Generally, you have two options:

- RabbitMQ clustering
- RabbitMQ federation

Clustering is not recommended over a WAN, but the connection between availability zones can be viewed (maybe a bit optimistically) as a LAN (this detailed post assumes otherwise, but this thread hints that using a cluster over multiple AZs is fine). With federation, you declare your exchanges to send all messages they receive to another node's exchange. This is pretty useful over a WAN, where network disconnects are common and speed is not so important, but it may still be applicable in a multi-AZ scenario, so it's worth investigating. Here is an example, with exact commands to execute, of how to achieve that, using the federation plugin. The tricky part with federation is auto-scaling – whenever you need to add a new node, you have to modify (some of) your existing nodes' configuration in order to set the new node as their upstream. You may also need to allow other machines to connect as guest to RabbitMQ ([{rabbit, [{loopback_users, []}]}] in your rabbitmq config file), or find a way to configure a custom username/password pair for federation to work.

With clustering, it's a bit different, and in fact simpler to set up. All you have to do is write a script that automatically joins the cluster on startup. This might be a shell script or a python script using the AWS SDK. The main steps in such a script (which, frankly, isn't that simple) are listed here; a rough sketch of the discovery step follows this section:

- Find all running instances in the RabbitMQ autoscaling group (using the AWS API filtering options).
- If this is the first node (the order is random and doesn't matter), assume it's the "seed" node for the cluster; all other nodes will connect to it.
- If this is not the first node, connect to the first node (using rabbitmqctl join_cluster rabbit@{node}), where {node} is the instance's private DNS name (available through the SDK).
- Stop RabbitMQ before doing all the configuration, and start it after you are done.

In all cases (clustering or federation), RabbitMQ relies on domain names. The easiest way to make that work is to enable DNS hostnames in your VPC: "EnableDnsHostnames": true. There is a little hack here when it comes to joining the cluster – the AWS API may return the fully qualified domain name, which includes something like ".eu-west-1.compute.internal" in addition to the ip-xxx-xxx-xxx-xxx part. When joining the RabbitMQ cluster you should strip this suffix, otherwise it doesn't work.
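Here is that hedged sketch of the discovery step, using the AWS SDK for Java 1.x instead of a shell or python script; the auto-scaling group name is a made-up placeholder, and credentials are assumed to come from the default provider chain.

import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.Filter;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.Reservation;

public class RabbitClusterDiscovery {
    public static void main(String[] args) {
        AmazonEC2 ec2 = new AmazonEC2Client(); // uses the default credential chain

        // Find all running instances that belong to the RabbitMQ auto-scaling group
        DescribeInstancesRequest request = new DescribeInstancesRequest().withFilters(
                new Filter("tag:aws:autoscaling:groupName").withValues("rabbitmq-group"),
                new Filter("instance-state-name").withValues("running"));

        List<String> nodes = new ArrayList<>();
        for (Reservation reservation : ec2.describeInstances(request).getReservations()) {
            for (Instance instance : reservation.getInstances()) {
                // Strip the ".<region>.compute.internal" suffix, as explained above
                nodes.add(instance.getPrivateDnsName().split("\\.")[0]);
            }
        }

        // The first node (order doesn't matter) is treated as the seed; the others would run:
        // rabbitmqctl stop_app && rabbitmqctl join_cluster rabbit@<seed> && rabbitmqctl start_app
        if (!nodes.isEmpty()) {
            System.out.println("Seed node: rabbit@" + nodes.get(0));
        }
    }
}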
The end result should allow for a cluster where, if a node dies and another one is spawned (by the auto-scaling group), the cluster continues to function properly. Comparing the two approaches with PerfTest yields better throughput for the clustering option – about a third fewer messages were processed with federation, and latency was also a bit higher. The tests should be executed from an application node towards the RabbitMQ ELB (otherwise you are testing just one node). You can get PerfTest and execute it with something like this (where the amqp address is the DNS name of the RabbitMQ load balancer):

wget http://www.rabbitmq.com/releases/rabbitmq-java-client/v3.3.4/rabbitmq-java-client-bin-3.3.4.tar.gz
tar -xvf rabbitmq-java-client-bin-3.3.4.tar.gz
cd rabbitmq-java-client-bin-3.3.4
sudo sh runjava.sh com.rabbitmq.examples.PerfTest -x 10 -y 10 -z 10 -h amqp://internal-foo-RabbitMQEl-1GM6IW33O-1097824.eu-west-1.elb.amazonaws.com:5672

Which of the two approaches you pick depends on your particular case, but I would generally recommend the clustering option: it is a bit more performant and a bit easier to set up and support in a cloud environment, with nodes spawning and dying often.

Reference: RabbitMQ in Multiple AWS Availability Zones from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog.

Setting up development environment for GWT

Introduction

This is part of a series intended to show how to develop cross-platform mobile applications in Java. In this blog post we will see what GWT is and set up the development environment for GWT. GWT is an open source development toolkit for developing complex browser-based Ajax applications. With GWT you can develop Rich Internet Applications (RIA) in Java, which is then compiled into JavaScript that is cross-browser compliant. Some of the advantages of developing web applications in GWT are:

- Since GWT apps can be developed in Java, you enjoy all the advantages of developing in Java, such as auto-complete, debugging, refactoring, code reuse, polymorphism, overriding and overloading. Java also has a large set of development tools, like Eclipse, NetBeans, JUnit and Maven, which you can use for developing Rich Internet Applications.
- Maintaining large JavaScript projects is not as easy as maintaining Java projects, but you need JavaScript to run Rich Internet Applications in the browser. GWT combines both advantages: you develop the applications in Java and they are then compiled into JavaScript, so you have the best of both worlds.
- GWT is quite similar to the AWT and Swing packages in Java and so has a low learning curve for Java developers (a short sketch of a GWT entry point is shown at the end of this introduction).
- Supporting the several browsers on the market is a difficult task; each browser creates its own set of problems. GWT solves this problem by creating optimized JavaScript code for each browser, specifically addressing that browser's issues. So you can support almost all the major browsers, including Android-, iPad- and iPhone-based browsers, without worrying about each browser's quirks.
- Developing UIs in Java is a difficult task compared to other aspects of Java programming. GWT solves this by providing several UI widgets, and you can also extend the existing widgets and create your own custom widgets if you wish.

Some of the limitations of GWT are:

- Since the Java code is compiled into JavaScript which runs in the browser, JavaScript needs to be enabled in the browser. The applications will not work if JavaScript is disabled.
- If you have specialist UI designers who create HTML pages, this will not work well; you may have to re-implement whatever the designer created, this time in GWT.
- Web pages created by GWT cannot be indexed by search engines, since these applications are generated dynamically.

Apart from the second drawback in the list, I think the others don't matter much. It is difficult to provide a rich Internet application in plain HTML; you need JavaScript to create one. Some apps provide a limited version that works if JavaScript is disabled, but the majority of apps require JavaScript, so you are not alone there. And there is no reason why large numbers of users would disable JavaScript in their browsers. There is also a workaround for indexing by search engines: the index page can be created in HTML and the remaining pages can be created in GWT. GWT provides an option to define the index page in HTML format, so the index page can still be indexed by search engines, and the other pages mostly present dynamically created data, so they don't need to come up in searches unless they are some kind of content management system (CMS). As with all frameworks, GWT doesn't solve every issue, but it surely makes Java developers more productive when developing web applications, provides cross-browser support and works perfectly well for complex enterprise web applications.
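To give a feel for how close GWT code is to Swing-style Java, here is a minimal, hedged sketch of a GWT entry point; the class name and messages are made up, and a matching .gwt.xml module descriptor and host HTML page would also be needed.

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;

// The entry point is the client-side "main" of a GWT module; it is compiled to JavaScript
public class HelloGwt implements EntryPoint {

    @Override
    public void onModuleLoad() {
        Button button = new Button("Click me");
        button.addClickHandler(new ClickHandler() {
            @Override
            public void onClick(ClickEvent event) {
                Window.alert("Hello from GWT!");
            }
        });
        // Attach the widget to the host HTML page, much like adding a component to a Swing container
        RootPanel.get().add(button);
    }
}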
GWT Development Environment Setup

We will now start setting up the development environment for GWT applications.

Java

Since you will be developing the applications in Java before they are compiled into JavaScript, you need to set up a Java development environment first. Once the Java environment is set up, let us configure the environment for GWT.

GWT SDK

Download the latest version of the GWT SDK from the GWT project site: http://www.gwtproject.org/download.html. Go to that link and click on 'Download GWT SDK', then unzip the downloaded GWT SDK to your preferred location on your hard disk.

You also need to install the Eclipse plug-in for GWT to develop GWT applications in Eclipse easily. To install the GWT Eclipse plug-in, launch Eclipse and go to Help –> Eclipse Marketplace. Search for GWT in the Eclipse Marketplace and find 'Google Plugin for Eclipse'; the version number should match the version of Eclipse you are using. If you are using Eclipse Kepler (Eclipse 4.3), look for 'Google Plugin for Eclipse 4.3' and click on 'Install'. Accept the license and click on 'Next' to continue the installation. It takes some time to download and install the plug-in. While installing you will get a security warning; just click on 'OK' to continue. Restart Eclipse after the installation of the plug-in is completed. After restarting Eclipse, you will see the GWT plug-in added to the Eclipse toolbar.

We also need to install extensions for the browser you are planning to use for running the GWT app in development mode. We will see later what development mode is, but for now let us install the browser plug-ins to complete the set-up of the development environment. If you launch the app in dev mode without installing the plug-in, the browser will display a message prompting you to download it, in both Internet Explorer and Chrome. When you click on Download in Chrome, you will be redirected to the Chrome extensions page, from where you can install the GWT Developer Plugin; click on the 'FREE' button to install it. In IE, clicking on the 'Download' button downloads a 'GWTDevPluginSetup.exe' installer, and launching it installs the GWT Developer Plugin for IE. Restart the browsers after the GWT Developer Plugin is installed.

Unfortunately the latest versions of Mozilla Firefox do not support the GWT Developer Plugin, so you can't work in development mode on the latest version of Firefox. However, GWT already provides a super dev mode which doesn't require installing any plug-in, so you can use Firefox in super dev mode during development.

Conclusion

We have completed setting up the development environment required for developing applications in GWT. We can start creating GWT applications!

Reference: Setting up development environment for GWT from our JCG partner Venkata Kiran at the Coding square blog.