
What's New Here?


Enterprise Integration Patterns (EIP) Revisited in 2014

Today, I gave a talk on “Enterprise Integration Patterns (EIP) Revisited in 2014” at Java Forum Stuttgart 2014, a great conference for developers and architects with 1600 attendees.

Enterprise Integration Patterns

Data exchange between companies is increasing, and with it the number of applications that must be integrated. The emergence of service-oriented architectures and cloud computing accelerates this trend even further. Realizing these integration scenarios is a complex and time-consuming task, because different applications and services do not share the same concepts, interfaces, data formats and technologies.

Originated and published over ten years ago by Gregor Hohpe and Bobby Woolf, Enterprise Integration Patterns (EIP) became the worldwide de facto standard for describing integration problems. They offer a standardized way to split huge, complex integration scenarios into smaller recurring problems. These patterns appear in almost every integration project. Most developers have already used some of these patterns, such as the filter, splitter or content-based router – some without even being aware of using EIPs. Today, EIPs are still used to reduce effort and complexity significantly. This session revisits EIPs and gives an overview of the status quo.

Open Source, Apache Camel, Talend ESB, JBoss, WSO2, TIBCO BusinessWorks, StreamBase, IBM WebSphere, Oracle, …

Fortunately, EIPs offer more possibilities than just modelling integration problems in a standardized way. Several frameworks and tools already implement these patterns, so the developer does not have to implement EIPs on his own. Therefore, the end of the session presents different frameworks and tools that can be used for modelling and implementing complex integration scenarios with EIPs. A small Camel sketch of one of these patterns follows below.

Slides

Reference: Enterprise Integration Patterns (EIP) Revisited in 2014 from our JCG partner Kai Waehner at the Blog about Java EE / SOA / Cloud Computing blog....
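To give a flavour of how such a framework implements an EIP, here is a minimal Apache Camel sketch of the content-based router mentioned above; the endpoint URIs and the "type" header are illustrative assumptions, not taken from the talk.

import org.apache.camel.builder.RouteBuilder;

// Content-Based Router EIP in Camel's Java DSL: each incoming message is
// routed to one of two endpoints depending on a header value.
public class OrderRoutes extends RouteBuilder {

    @Override
    public void configure() {
        from("direct:orders")                              // hypothetical input endpoint
            .choice()
                .when(header("type").isEqualTo("gold"))    // predicate on message header
                    .to("direct:goldOrders")
                .otherwise()
                    .to("direct:standardOrders");
    }
}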

Writing Tests for Data Access Code – Don’t Test the Framework

When we write tests for our data access code, should we test every method of its public API? It sounds natural at first. After all, if we don’t test everything, how can we know that our code works as expected? That question provides us an important clue: our code. We should write tests only for our own code.

What Is Our Own Code?

It is sometimes hard to identify the code which we should test. The reason for this is that our data access code is integrated tightly with the library or framework which we use when we save information to the used data storage or read information from it.

For example, if we want to create a Spring Data JPA repository which provides CRUD operations for Todo objects, we should create an interface which extends the CrudRepository interface. The source code of the TodoRepository interface looks as follows:

import org.springframework.data.repository.CrudRepository;

public interface TodoRepository extends CrudRepository<Todo, Long> {
}

Even though we haven’t added any methods to our repository interface, the CrudRepository interface declares many methods which are available to the classes that use our repository interface. These methods are not our code, because they are implemented and maintained by the Spring Data team. We only use them.

On the other hand, if we add a custom query method to our repository, the situation changes. Let’s assume that we have to find all todo entries whose title is equal to the given search term. After we have added this query method to our repository interface, its source code looks as follows:

import java.util.List;

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

public interface TodoRepository extends CrudRepository<Todo, Long> {

    @Query("SELECT t FROM Todo t where t.title = :searchTerm")
    List<Todo> search(@Param("searchTerm") String searchTerm);
}

It would be easy to claim that this method is our own code and that is why we should test it. However, the truth is a bit more complex. Even though the JPQL query was written by us, Spring Data JPA provides the code which passes that query forward to the used JPA provider. Still, I think that this query method is our own code, because the most essential part of it was written by us.

If we want to identify our own data access code, we have to locate the essential part of each method. If this part was written by us, we should treat that method as our own code. This is all pretty obvious, and the more interesting question is:

Should We Test It?

Our repository interface provides two kinds of methods to the classes which use it:

- It provides methods that are declared by the CrudRepository interface.
- It provides a query method that was written by us.

Should we write integration tests for the TodoRepository interface and test all of these methods? No. We should not do this, because:

- The methods declared by the CrudRepository interface are not our own code. This code is written and maintained by the Spring Data team, and they have ensured that it works. If we don’t trust that their code works, we should not use it.
- Our application probably has many repository interfaces which extend the CrudRepository interface. If we decide to write tests for the methods declared by the CrudRepository interface, we have to write these tests for all repositories. If we choose this path, we will spend a lot of time writing tests for someone else’s code, and frankly, it is not worth it.
- Our own code might be so simple that writing tests for our repository makes no sense.

In other words, we should concentrate on finding an answer to this question: should we write integration tests for our repository methods (methods which were written by us), or should we just write end-to-end tests?

The answer to this question depends on the complexity of our repository method. I am aware that complexity is a pretty vague word, and that is why we need some kind of guideline that will help us to find the best way of testing our repository methods. One way to make this decision is to think about the amount of work which is required to test every possible scenario. This makes sense because:

- It takes less work to write integration tests for a single repository method than to write the same tests for the feature that uses the repository method.
- We have to write end-to-end tests anyway.

That is why it makes sense to minimize our investment (time) and maximize our profits (test coverage). We can do this by following these rules:

- If we can test all possible scenarios by writing only a few tests, we shouldn’t waste our time writing integration tests for our repository method. We should write end-to-end tests which ensure that the feature is working as expected.
- If we need to write more than a few tests, we should write integration tests for our repository method, and write only a few end-to-end tests (smoke tests).

A sketch of such an integration test is shown at the end of this post.

Summary

This blog post has taught us two things:

- We should not waste our time writing tests for a data access framework (or library) written by someone else. If we don’t trust that framework (or library), we should not use it.
- Sometimes we should not write integration tests for our data access code either. If the tested code is simple enough (we can cover all situations by writing a few tests), we should test it by writing end-to-end tests.

Reference: Writing Tests for Data Access Code – Don’t Test the Framework from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....
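Here is a minimal sketch of an integration test that covers only the custom search() method, using Spring Boot's @DataJpaTest slice with an embedded database (a newer convenience than the original post); the Todo constructor and getTitle() accessor are assumptions.

import static org.assertj.core.api.Assertions.assertThat;

import java.util.List;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

// Tests only the query method we wrote ourselves. The CRUD methods
// inherited from CrudRepository are deliberately not tested.
@DataJpaTest
class TodoRepositoryIntegrationTest {

    @Autowired
    private TodoRepository repository;

    @Test
    void searchReturnsOnlyTodoEntriesWithMatchingTitle() {
        repository.save(new Todo("Write tests"));   // assumed constructor
        repository.save(new Todo("Buy milk"));

        List<Todo> found = repository.search("Write tests");

        assertThat(found).hasSize(1);
        assertThat(found.get(0).getTitle()).isEqualTo("Write tests");
    }
}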

Converting XML to CSV using XSLT 1.0

This post shows you how to convert a simple XML file to CSV using XSLT. Consider the following sample XML:

<library>
  <book>
    <author>Dan Simmons</author>
    <title>Hyperion</title>
    <publishDate>1989</publishDate>
  </book>
  <book>
    <author>Douglas Adams</author>
    <title>The Hitchhiker's Guide to the Galaxy</title>
    <publishDate>1979</publishDate>
  </book>
</library>

This is the desired CSV output:

author,title,publishDate
Dan Simmons,Hyperion,1989
Douglas Adams,The Hitchhiker's Guide to the Galaxy,1979

The following XSL style sheet (compatible with XSLT 1.0) can be used to transform the XML into CSV. It is quite generic and can easily be configured to handle different XML elements by changing the list of fields defined at the beginning.

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text" />

  <xsl:variable name="delimiter" select="','" />

  <!-- define an array containing the fields we are interested in -->
  <xsl:variable name="fieldArray">
    <field>author</field>
    <field>title</field>
    <field>publishDate</field>
  </xsl:variable>
  <xsl:param name="fields" select="document('')/*/xsl:variable[@name='fieldArray']/*" />

  <xsl:template match="/">
    <!-- output the header row -->
    <xsl:for-each select="$fields">
      <xsl:if test="position() != 1">
        <xsl:value-of select="$delimiter"/>
      </xsl:if>
      <xsl:value-of select="." />
    </xsl:for-each>
    <!-- output newline -->
    <xsl:text>&#xa;</xsl:text>
    <xsl:apply-templates select="library/book"/>
  </xsl:template>

  <xsl:template match="book">
    <xsl:variable name="currNode" select="." />
    <!-- output the data row -->
    <!-- loop over the field names and find the value of each one in the xml -->
    <xsl:for-each select="$fields">
      <xsl:if test="position() != 1">
        <xsl:value-of select="$delimiter"/>
      </xsl:if>
      <xsl:value-of select="$currNode/*[name() = current()]" />
    </xsl:for-each>
    <!-- output newline -->
    <xsl:text>&#xa;</xsl:text>
  </xsl:template>
</xsl:stylesheet>

Let’s try it out:

$ xsltproc xml2csv.xsl books.xml
author,title,publishDate
Dan Simmons,Hyperion,1989
Douglas Adams,The Hitchhiker's Guide to the Galaxy,1979

Reference: Converting XML to CSV using XSLT 1.0 from our JCG partner Fahd Shariff at the fahd.blog blog....
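If you would rather run the same transformation from Java instead of xsltproc, here is a minimal sketch using the JAXP API that ships with the JDK; the file names match the ones used above.

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Applies xml2csv.xsl to books.xml and writes the CSV to standard output.
public class Xml2Csv {

    public static void main(String[] args) throws TransformerException {
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("xml2csv.xsl"));
        transformer.transform(new StreamSource("books.xml"),
                new StreamResult(System.out));
    }
}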

JavaFX Tip 11: Updating Read-Only Properties

Custom controls often feature “read-only” properties. This means that they cannot be set from outside the control, not even from their own skin class. It is often the behaviour of a control that leads to a change of a read-only property. In JavaFX this behaviour can be implemented in the control itself or in the skin, so we sometimes end up with a skin wanting to update a read-only property of the control. How can this be done?

Backdoor: Property Map

The solution is quite simple: use the properties map of the control as a backdoor to the control class. The properties map is observable, so if the skin sets a value in the map, the control will be informed and can update the value of the read-only property itself.

The Control Class

The property in the control class might be defined like this:

private final ReadOnlyDoubleWrapper myReadOnly = new ReadOnlyDoubleWrapper();

public final ReadOnlyDoubleProperty myReadOnlyProperty() {
    return myReadOnly.getReadOnlyProperty();
}

public final Double getMyReadOnly() {
    return myReadOnly.get();
}

To update the property, the control class registers a listener with its own property map and listens for changes to the property called “myReadOnly”:

getProperties().addListener(new MapChangeListener<Object, Object>() {
    @Override
    public void onChanged(Change<? extends Object, ? extends Object> c) {
        if (c.wasAdded() && "myReadOnly".equals(c.getKey())) {
            if (c.getValueAdded() instanceof Number) {
                // use doubleValue() so that Integer values (e.g. 42) work, too
                myReadOnly.set(((Number) c.getValueAdded()).doubleValue());
            }
            getProperties().remove("myReadOnly");
        }
    }
});

Important: make sure to use a unique name for the property key, or you might end up with naming conflicts. It is good practice to prefix the name with the package name of your control, e.g. com.myframework.myReadOnly.

The Skin Class

Now the skin class can update the property by setting the property value in the control’s property map:

getSkinnable().getProperties().put("myReadOnly", 42);

Reference: JavaFX Tip 11: Updating Read-Only Properties from our JCG partner Dirk Lemmermann at the Pixel Perfect blog....

JavaFX Tip 10: Custom Composite Controls

Writing custom controls in JavaFX is a simple and straightforward process. A control class is needed for controlling the state of the control (hence the name). A skin class is needed for the appearance of the control. And more often than not, a CSS file for customizing the appearance.

A common approach for controls is to hide the nodes they are using inside their skin class. The TextField control, for example, uses two instances of javafx.scene.text.Text: one for the regular text, one for the prompt text. These nodes are not accessible via the TextField API. If you want to get a reference to them, you would need to call the lookup(String) method on Node. So far so good. It is actually hard to think of use cases where you would need access to the Text nodes.

But… it becomes a whole different story if you develop complex custom controls. The FlexGanttFX Gantt charting framework is one example. The GanttChart control consists of many other complex controls and, following the “separation of concerns” principle, these controls carry all those methods and properties that are relevant for them to work properly. If these controls were hidden inside the skin of the Gantt chart, there would be no way to access them, and the Gantt chart control would need to implement a whole bunch of delegation methods. This would completely clutter the Gantt chart API. For this reason the GanttChart class does provide accessor methods to its child controls, and even factory methods for creating the child nodes.

Example

The following screenshot shows a new control I am currently working on for the ControlsFX project. I am calling it ListSelectionView, and it features two ListView instances. The user can move items from one list to another by either double-clicking them or by using the buttons in the middle.

List views are complex controls. They have their own data and selection models, their own cell factories, they fire events, and so on. All of these things we might want to either customize or listen to – something hard to do if the views are hidden in the skin class. The solution is to create the list views inside the control class via protected factory methods and to provide accessor methods. The following code fragment shows the pattern that can be used:

public class ListSelectionView<T> extends Control {

    private ListView<T> sourceListView;
    private ListView<T> targetListView;

    public ListSelectionView() {
        sourceListView = createSourceListView();
        targetListView = createTargetListView();
    }

    protected ListView<T> createSourceListView() {
        return new ListView<>();
    }

    protected ListView<T> createTargetListView() {
        return new ListView<>();
    }

    public final ListView<T> getSourceListView() {
        return sourceListView;
    }

    public final ListView<T> getTargetListView() {
        return targetListView;
    }
}

The factory methods can be used to create standard ListView instances and configure them right there, or to return already existing ListView specializations. A company called ACME might already provide a standard set of controls (that implement the company’s marketing concept). Then the factory methods might return a control called ACMEListView, as the sketch below shows.

Reference: JavaFX Tip 10: Custom Composite Controls from our JCG partner Dirk Lemmermann at the Pixel Perfect blog....
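A minimal sketch of that customization, assuming a hypothetical ACMEListView subclass of ListView:

import javafx.scene.control.ListView;

// Hypothetical subclass plugging company-specific list views into the
// control via the protected factory methods; ACMEListView is assumed
// to be an existing ListView specialization.
public class ACMEListSelectionView<T> extends ListSelectionView<T> {

    @Override
    protected ListView<T> createSourceListView() {
        return new ACMEListView<>();
    }

    @Override
    protected ListView<T> createTargetListView() {
        return new ACMEListView<>();
    }
}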

Testing Love and Polyamorous TDD

The rigor and quality of testing in the current software development world leaves a lot to be desired. Additionally, it feels like we are living in the dark ages, with mysterious edicts about the “right” way to test being delivered by an anointed few vocal prophets, with little or no effort given to educating the general populace about why it is “right”; the effort goes into evangelizing instead. I use the religious metaphor because a very large amount of the rhetoric seems intended to sway people to follow a particular set of ceremonies without doing a good job of explaining the underpinnings and why those ceremonies have value.

I read with interest a post by David Heinemeier Hansson titled “TDD is dead. Long live testing” that pretty much sums up my opinion of the current state of affairs in this regard: a number of zealots proclaiming TDD to be the “one true way”, but not a lot of evidence that this is actually true. Yes, Test Driven Development (TDD) is a good practice, but it is NOT necessarily superior to integration testing, penetration testing, operational readiness testing, disaster recovery testing, or any of a large number of other validation activities that should be part of a software delivery practice. Embracing and developing a passion for all manner of testing are important parts of being a well-rounded, enlightened, and effective software developer.

Since I have this perspective, I’m particularly jostled by the perspective outlined in Bob Martin’s treatise presenting monogamous TDD as the one true way. In direct reaction to that post, I propose we start to look at software validation as an entire spectrum of practices that we’ll just call Polyamorous TDD. The core tenets of this approach are that openness, communication, the value of people, and defining quality are more important than rigorous adherence to specific practices. Furthermore, we should promote the idea that the best way to do things often depends on which particular group of people is doing them (note, Agile folks, does this sound familiar?).

I chose the term Polyamory instead of Polygamy or Monogamy for the following reasons:

- It implies there are multiple “correct” ways to test your code, but you are not necessarily married to any one, or even a specific group of them.
- It further suggests that testing is about openness and loving your code instead of adhering to some sort of contract.
- On a more subtle level, it reinforces the notion that acceptance, openness, and communication are valued over strict adherence to a particular practice or set of practices.

All this is an attempt to promote the idea that it’s more important that we come together to build understanding about the value of better validating our code than to convert people to the particular practice that works for us individually. To build this understanding, we need to more actively embrace new ideas, explore them, and have open lines of communication that are free of drama and contention. This will not happen if we cannot openly admit that there is more than one “right” way to do things, and we keep preaching the same tired story that many of us have already heard and have frankly progressed beyond. It’s OK to be passionate about a particular viewpoint, but we still need to be respectful and check our egos at the door when it comes to this topic.

As a final tangential note regarding Uncle Bob’s apparent redefinition of the word fundamentalism in his post: as far as I can see, the definition he chose to use was never actually in use. While I understand what he was trying to say, he was just wrong, and DHH’s use of the word based on the dictionary definition is still very apt. From the dictionary:

1 a often capitalized : a movement in 20th century Protestantism emphasizing the literally interpreted Bible as fundamental to Christian life and teaching
  b : the beliefs of this movement
  c : adherence to such beliefs
2 : a movement or attitude stressing strict and literal adherence to a set of basic principles <Islamic fundamentalism> <political fundamentalism>

Uncle Bob, please try to be careful when rebuffing folks on improper word usage, and try not to invent new definitions of words, especially when you’re in a position of perceived authority in our little world of software development. Express your opinion or facts, and be careful when you state an opinion as if it were a fact; it only leads to confusion and misunderstanding.

Reference: Testing Love and Polyamorous TDD from our JCG partner Mike Mainguy at the mike.mainguy blog....

RabbitMQ in Multiple AWS Availability Zones

When working with AWS, in order to have a highly-available setup, one must have instances in more than one availability zone (AZ ≈ data center). If one AZ dies (which may happen), your application should continue serving requests. It’s simple to set up your application nodes in multiple AZs (if they are properly written to be stateless), but it’s trickier for databases, message queues and everything else that has state. So let’s see how to configure RabbitMQ.

The first steps are relevant not only to RabbitMQ, but to any persistent data solution. First (no matter whether using CloudFormation or manual setup), you must:

- Have a VPC. It might be possible without a VPC, but I can’t guarantee that, especially regarding the DNS hostnames discussed below.
- Declare private subnets (for each AZ).
- Declare the RabbitMQ autoscaling group (recommended to have one) to span multiple AZs, using: "AvailabilityZones" : { "Fn::GetAZs" : { "Ref": "AWS::Region" } }
- Declare the RabbitMQ autoscaling group to span multiple subnets using the VPCZoneIdentifier property.
- Declare the LoadBalancer in front of your RabbitMQ nodes (that is the easiest way to ensure even distribution of load to your Rabbit cluster) to span all the subnets.
- Declare the LoadBalancer to be "CrossZone": true.

Then comes the RabbitMQ-specific configuration. Generally, you have two options:

- RabbitMQ Clustering
- RabbitMQ Federation

Clustering is not recommended in case of a WAN, but the connection between availability zones can be viewed (maybe a bit optimistically) as a LAN (this detailed post assumes otherwise, but this thread hints that using a cluster over multiple AZs is fine).

With federation, you declare your exchanges to send all messages they receive to another node’s exchange. This is pretty useful in a WAN, where network disconnects are common and speed is not so important, but it may still be applicable in a multi-AZ scenario, so it’s worth investigating. Here is an example, with exact commands to execute, of how to achieve that, using the federation plugin. The tricky part with federation is auto-scaling – whenever you need to add a new node, you have to modify (some of) your existing nodes’ configuration in order to set the new node as their upstream. You may also need to allow other machines to connect as guest to RabbitMQ ([{rabbit, [{loopback_users, []}]}] in your rabbitmq config file), or find a way to configure a custom username/password pair for federation to work.

With clustering, it’s a bit different, and in fact simpler to set up. All you have to do is write a script to automatically join a cluster on startup. This might be a shell script or a python script using the AWS SDK. The main steps in such a script (which, yeah, frankly, isn’t that simple) are:

- Find all running instances in the RabbitMQ autoscaling group (using the AWS API filtering options).
- If this is the first node (the order is random and doesn’t matter), assume it’s the “seed” node for the cluster, and all other nodes will connect to it.
- If this is not the first node, connect to the first node (using rabbitmqctl join_cluster rabbit@{node}), where {node} is the instance’s private DNS name (available through the SDK).
- Stop RabbitMQ before doing all the configuration, and start it after you are done.

A sketch of the instance-discovery part of such a script follows below.

In all cases (clustering or federation), RabbitMQ relies on domain names. The easiest way to make it work is to enable DNS hostnames in your VPC: "EnableDnsHostnames": true. There’s a little hack here when it comes to joining a cluster – the AWS API may return the fully qualified domain name, which includes something like “.eu-west-1.compute.internal” in addition to the ip-xxx-xxx-xxx-xxx part. So when joining the RabbitMQ cluster, you should strip this suffix, otherwise it doesn’t work.

The end result should be a cluster where, if a node dies and another one is spawned (by the auto-scaling group), the cluster continues to function properly. Comparing the two approaches with PerfTest yields better throughput for the clustering option – about 1/3 fewer messages were processed with federation, and latency was also a bit higher. The tests should be executed from an application node, towards the RabbitMQ ELB (otherwise you are testing just one node). You can get PerfTest and execute it with something like this (where the amqp address is the DNS name of the RabbitMQ load balancer):

wget http://www.rabbitmq.com/releases/rabbitmq-java-client/v3.3.4/rabbitmq-java-client-bin-3.3.4.tar.gz
tar -xvf rabbitmq-java-client-bin-3.3.4.tar.gz
cd rabbitmq-java-client-bin-3.3.4
sudo sh runjava.sh com.rabbitmq.examples.PerfTest -x 10 -y 10 -z 10 -h amqp://internal-foo-RabbitMQEl-1GM6IW33O-1097824.eu-west-1.elb.amazonaws.com:5672

Which of the two approaches you pick depends on your particular case, but I would generally recommend the clustering option: a bit more performant and a bit easier to set up and support in a cloud environment, with nodes spawning and dying often.

Reference: RabbitMQ in Multiple AWS Availability Zones from our JCG partner Bozhidar Bozhanov at the Bozho’s tech blog blog....
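Here is a minimal sketch, in Java rather than shell or Python, of the discovery step described above, assuming the AWS SDK for Java v1; the autoscaling group name is a hypothetical example, and picking the oldest instance is just one deterministic way for all nodes to agree on a seed.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.Filter;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.Reservation;

// A sketch of the "find the seed node" step, not the author's script.
public class RabbitClusterDiscovery {

    public static void main(String[] args) {
        AmazonEC2Client ec2 = new AmazonEC2Client(); // credentials from the default chain

        // Find all running instances belonging to the RabbitMQ autoscaling group.
        DescribeInstancesRequest request = new DescribeInstancesRequest().withFilters(
                new Filter("tag:aws:autoscaling:groupName").withValues("rabbitmq-asg"),
                new Filter("instance-state-name").withValues("running"));

        List<Instance> instances = new ArrayList<>();
        for (Reservation reservation : ec2.describeInstances(request).getReservations()) {
            instances.addAll(reservation.getInstances());
        }

        // Treat the oldest instance as the "seed" node that all others join.
        Instance seed = instances.stream()
                .min(Comparator.comparing(Instance::getLaunchTime))
                .orElseThrow(IllegalStateException::new);

        // Strip the ".eu-west-1.compute.internal"-style suffix, keeping only
        // the ip-xxx-xxx-xxx-xxx part, as described above.
        String node = seed.getPrivateDnsName().split("\\.")[0];
        System.out.println("rabbitmqctl join_cluster rabbit@" + node);
    }
}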

Setting up development environment for GWT

Introduction

This is part of a series intended to develop cross-platform mobile applications in Java. In this blog post we will see what GWT is and set up the development environment for GWT.

GWT is an open source development toolkit for developing complex browser-based Ajax applications. Using GWT you can develop Rich Internet Applications (RIA) in Java, which is then compiled into JavaScript and is cross-browser compliant.

Some of the advantages of developing web applications in GWT are:

- Since GWT apps can be developed in Java, you can enjoy all the advantages of developing in Java, like auto-complete, debugging, refactoring, code reuse, polymorphism, overriding and overloading. And Java has a large set of development tools, like Eclipse, NetBeans, JUnit and Maven, which you can use for developing Rich Internet Applications.
- Maintaining large JavaScript projects is not easy compared to Java projects, but you need JavaScript to run Rich Internet Applications in the browser. GWT combines both advantages: you develop the applications in Java, and they are then compiled into JavaScript, so you have the best of both.
- GWT is quite similar to the AWT and Swing packages in Java, and so has a low learning curve for Java developers.
- Supporting the several browsers in the market is a difficult task; each browser creates its own set of problems. GWT solves this problem by creating optimized JavaScript code for each browser, specifically addressing the issues of that browser. So you can support almost all the major browsers, including Android, iPad and iPhone based browsers, without worrying about the quirks of each one.
- Developing UIs in Java is a difficult task compared to other aspects of Java programming. GWT solves this by providing several UI widgets, and you can also extend the existing widgets and create your own custom widgets if you wish to.

Some of the limitations of GWT are:

- Since the Java code is compiled into JavaScript which runs in the browser, JavaScript needs to be enabled in the browser. The applications will not work if JavaScript is not enabled.
- If you have specialist UI designers who create HTML pages, this will not work. You may have to re-implement whatever the designer created in GWT.
- Web pages created by GWT cannot be indexed by search engines, since these applications are generated dynamically.

I think that, except for the second drawback in the list, the others don’t matter much. It is difficult to provide a rich internet application in plain HTML; you need JavaScript to create rich internet applications. Some apps provide a limited version which works if JavaScript is disabled, but the majority of apps require JavaScript, so you would not be alone there. And there is no reason why a large number of users would disable JavaScript in their browsers. There is also a workaround for indexing by search engines: the index page can be created in HTML, and the remaining pages can be created in GWT. GWT provides an option to define the index page in HTML format, so the index page can still be indexed by search engines; the other pages are mostly dynamically created data, so they don’t need to come up in search results unless they are part of some kind of content management system (CMS).

As is the case with all frameworks, GWT doesn’t solve every issue, but it surely makes Java developers more productive at developing web applications, provides cross-browser support, and works perfectly for complex enterprise web applications.

GWT Development Environment Setup

We will start setting up the development environment for GWT applications.

Java

Since you will be developing the applications in Java before they are compiled into JavaScript, you need to set up a Java development environment first.

GWT SDK

Once the Java environment is set up, download the latest version of the GWT SDK from the GWT project site: http://www.gwtproject.org/download.html. Click on ‘Download GWT SDK’ on that page, then unzip the downloaded GWT SDK to your preferred location on your hard disk.

Next, you need to install the Eclipse plug-in for GWT to develop GWT applications in Eclipse easily. To install the GWT Eclipse plug-in, launch Eclipse and go to Help –> Eclipse Marketplace. Search for GWT in the Eclipse Marketplace and find ‘Google Plugin for Eclipse’; the version number should match the version of Eclipse you are using. If you are using Eclipse Kepler (Eclipse 4.3), you need to look for ‘Google Plugin for Eclipse 4.3’ and click on ‘Install’. Accept the license and click on ‘Next’ to continue the installation. It takes some time to download and install the plug-in, and while installing you will get a security warning; just click on ‘OK’ to continue. Restart Eclipse after the installation of the plug-in is completed. After restarting Eclipse, you will see the GWT plug-in added to the Eclipse toolbar.

We also need to install extensions in the browser you are planning to use for running the GWT app in development mode. We will see later what development mode is, but for now let us install the browser plug-ins to complete our setup of the development environment. If you launch the app in dev mode without installing the plug-in, the browser will display a message prompting you to download it. On Chrome, clicking on ‘Download’ redirects you to the Chrome extensions page, from where you can install the GWT Developer Plugin by clicking the ‘FREE’ button. On IE, clicking on the ‘Download’ button downloads a ‘GWTDevPluginSetup.exe’ installer, and launching it will install the GWT Developer Plugin for IE. Restart the browsers after the GWT Developer Plugin is installed.

Unfortunately, the latest versions of Mozilla Firefox don’t support the GWT Developer Plugin, so you can’t work in development mode on the latest version of Firefox. However, GWT already provides a super dev mode which doesn’t require installing any plug-in, so you can use Firefox in super dev mode during development.

Conclusion

We have completed setting up the development environment required for developing applications in GWT. We can start creating GWT applications! A minimal first entry point is sketched below.

Reference: Setting up development environment for GWT from our JCG partner Venkata Kiran at the Coding square blog....
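To smoke-test the new environment, a minimal GWT entry point such as the following can be used; the class name is illustrative, and the module descriptor (.gwt.xml) that registers it is assumed to exist.

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;

// A minimal entry point: shows a button that pops an alert when clicked.
public class HelloGwt implements EntryPoint {

    @Override
    public void onModuleLoad() {
        Button button = new Button("Click me");
        button.addClickHandler(new ClickHandler() {
            @Override
            public void onClick(ClickEvent event) {
                Window.alert("GWT environment works!");
            }
        });
        RootPanel.get().add(button);
    }
}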

JavaFX Tip 9: Do Not Mix Swing / JavaFX

The JavaFX team has tried very hard to convince us that migrating from Swing to JavaFX is easy because of the option to embed Swing content in a JavaFX UI and vice versa. I must admit that I never tried it myself, but based on the feedback I am getting from my customers, I can only recommend not mixing Swing and JavaFX. At the time of this writing, there were over 200 unresolved issues (120+ bugs) related to Swing integration registered with the JavaFX issue management system.

Issue Types

The following is a list of issues that you might encounter if you still decide to go with it:

- Appearance – there will always be a noticeable difference between the parts that were done in Swing and those that were done in JavaFX. Fields will show different font quality, different borders, different focus highlighting, etc.
- Flickering – you might encounter flickering in your UI.
- Behaviour – controls will behave differently. The user will be able to scroll JavaFX controls with a gesture, but not the Swing controls. The columns of a JavaFX TableView control will autosize when you double-click the line between two column headers; the Swing JTable does not.
- Threading – you are constantly dealing with issues related to the use of two different UI threads (the Swing EDT and the JavaFX application thread). You will run into freezing UIs and inconsistent state issues.
- Window Management – controlling which window will be on top of which other windows, and which window is blocking input (modality) for other windows, becomes difficult or impossible. Popup windows might no longer hide themselves automatically.
- Focus Handling – the wrong window might get the focus. Focus traversal between Swing controls and JavaFX controls might not work.
- Context Menus – you might not be able to close the menu by clicking somewhere else in the UI, or you might end up with two context menus open at the same time (one controlled by JavaFX, one controlled by Swing).
- Cursor – setting different cursors on different controls / components will not work as expected.
- Drag and Drop – whether within the SwingNode itself or between Swing and JavaFX, exceptions are heading your way.
- Performance – the performance / rendering speed of JavaFX controls mixed with Swing components will degrade.

Conclusion

What does this mean? Well, it means that in the end you will not save time if you follow the Swing/JavaFX mixing strategy. At least not if quality is important to you. If your focus is only on making features available, then maybe; but if you want to ship a commercial-grade, professional application, then no. If you have already decided to migrate to JavaFX, then do the Full Monty and redo your entire application in JavaFX. It is worth the wait.

Reference: JavaFX Tip 9: Do Not Mix Swing / JavaFX from our JCG partner Dirk Lemmermann at the Pixel Perfect blog....

Trust instead of Threats

According to Dr. Gary McGraw’s groundbreaking work on software security, up to half of security mistakes are made in design rather than in coding. So it’s critical to prevent – or at least try to find and fix – security problems in design.

For the last 10 years we’ve been told that we are supposed to do this through threat modeling, aka architectural risk analysis – a structured review of the design or architecture of a system from a threat perspective, to identify security weaknesses and come up with ways to resolve them. But outside of a few organizations like Microsoft, threat modeling isn’t being done at all, or at best only on an inconsistent basis.

Cigital’s work on the Build Security In Maturity Model (BSIMM), which looks in detail at application security programs in different organizations, has found that threat modeling doesn’t scale. Threat modeling is still too heavyweight, too expensive, too waterfally, and requires special knowledge and skills. The SANS Institute’s latest survey on application security practices and tools asked organizations to rank the application security tools and practices they used the most and found most effective; threat modeling came second to last. And at the 2014 RSA Conference, Jim Routh of Aetna, who has implemented large-scale secure development programs in 4 different major organizations, admitted that he has not yet succeeded in injecting threat modeling into design anywhere, “because designers don’t understand how to make the necessary tradeoff decisions”.

Most developers don’t know what threat modeling is, or how to do it, never mind practice it on a regular basis. With the push to accelerate software delivery, from Agile to One-Piece Continuous Flow and Continuous Deployment to production in Devops, the opportunities to inject threat modeling into software development are disappearing. What else can we do to include security in application design? If threat modeling isn’t working, what else can we try?

“There are much better ways to deal with security than threat modelling… like not being a tool.” – Jeff Curless, comment on a blog post about threat modeling

Security people think in terms of threats and risks – at least the good ones do. They are good at exploring negative scenarios and what-ifs, discovering and assessing risks. Developers don’t think this way. For most of them, walking through possibilities, things that will probably never happen, is a waste of time. They have problems that need to be solved, requirements to understand, features to deliver. They think like engineers, and sometimes they can think like customers, but not like hackers or attackers.

In his new book on threat modeling, Adam Shostack says that telling developers to “think like an attacker” is like telling someone to think like a professional chef. Most people know something about cooking, but cooking at home and being a professional chef are very different things. The only way to know what it’s like to be a chef and to think like a chef is to work for some time as a chef. Talking to a chef, or reading a book about being a chef, or sitting in meetings with a chef won’t cut it.

Developers aren’t good at thinking like attackers, but they constantly make assertions in design, including important assertions about dependencies and trust. This is where security should be injected into design.

Trust instead of Threats

Threats don’t seem real when you are designing a system, and they are hard to quantify, even if you are an expert. But trust assertions and dependencies are real and clear and concrete. Easy to see, easy to understand, easy to verify. You can read the code, or write some tests, or add a run-time check.

Reviewing a design this way starts off the same as a threat modeling exercise, but it is much simpler and less expensive. Look at the design at a system or subsystem level. Draw trust boundaries between systems or subsystems or layers in the architecture, to see what’s inside and what’s outside of your code, your network, your datacenter. Trust boundaries are like software firewalls in the system. Data inside a trust boundary is assumed to be valid, commands inside the trust boundary are assumed to have been authorized, users are assumed to be authenticated. Make sure that these assumptions are valid. And make sure to review dependencies on outside code. A lot of security vulnerabilities occur at the boundaries with other systems or with outside libraries, because of misunderstandings or assumptions in contracts (see OWASP Application Threat Modeling).

Then, instead of walking through STRIDE or CAPEC or attack trees or some other way of enumerating threats and risks, ask some simple questions about trust:

- Are the trust boundaries actually where you think they are, or where you think they should be?
- Can you trust the system or subsystem or service on the other side of the boundary? How can you be sure? Do you know how it works, what controls and limits it enforces? Have you reviewed the code? Is there a well-defined API contract or protocol? Do you have tests that validate the interface semantics and syntax?
- What data is being passed to your code? Can you trust this data – has it been validated and safely encoded, or do you need to take care of this in your code? Could the data have been tampered with or altered by someone else or some other system along the way?
- Can you trust the code on the other side to protect the integrity and confidentiality of data that you pass to it? How can you be sure? Should you enforce this through a hash or an HMAC or a digital signature, or by encrypting the data?
- Can you trust the user’s identity? Have they been properly authenticated? Is the session protected?
- What happens if an exception or error occurs, or if a remote call hangs or times out – could you lose data or data integrity, or leak data? Does the code fail open or fail closed?
- Are you relying on protections in the run-time infrastructure or application framework or language to enforce any of your assertions? Are you sure that you are using these functions correctly?

These are all simple, easy-to-answer questions about fundamental security controls: authentication, access control, auditing, encryption and hashing, and especially input data validation and input trust, which Michael Howard at Microsoft has found to be the cause of half of all security bugs.

Secure Design that can actually be done

Looking at dependencies and trust will find – and prevent – important problems in application design. Developers don’t need to learn security jargon, come up with attacker personas, build catalogs of known attacks and risk-weighting matrices, figure out how to use threat modeling tools, know what a cyber kill chain is, or understand the relative advantages of asset-centric threat modeling over attacker-centric or software-centric modeling. They don’t need to build separate models or hold separate formal review meetings. Just look at the existing design, and ask some questions about trust and dependencies.

This can be done by developers and architects in-phase, as they are working out the design or changes to the design – when it is easiest and cheapest to fix mistakes and oversights. And like threat modeling, questioning trust doesn’t need to be done all of the time. It’s important when you are in the early stages of defining the architecture, or when making a major design change, especially a change that makes the application’s attack surface much bigger (like introducing a new API or transitioning part of the system to the Cloud) – any time that you are doing a “first of”, including working on a part of the system for the first time. The rest of the time, the risks of getting trust assumptions wrong should be much lower.

Just focusing on trust won’t be enough if you are building a proprietary secure protocol. And it won’t be enough for high-risk security features – although you should be trying to leverage the security capabilities of your application framework or a special-purpose security library to do this anyway. There are still cases where threat modeling should be done – and code reviews and pen testing too. But for most application design, making sure that you aren’t misplacing trust should be enough to catch important security problems before it is too late.

Reference: Trust instead of Threats from our JCG partner Jim Bird at the Building Real Software blog....
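To make the input-trust point concrete, here is a minimal sketch of validating data as it crosses a trust boundary; the order-id format is a hypothetical rule, not something from the article.

import java.util.Objects;
import java.util.regex.Pattern;

// Enforces input trust at a boundary: data arriving from an outside system
// is validated before the rest of the code treats it as trusted.
public final class InboundOrderValidator {

    // Hypothetical contract: order ids are 8-12 uppercase alphanumerics.
    private static final Pattern ORDER_ID = Pattern.compile("[A-Z0-9]{8,12}");

    private InboundOrderValidator() {
    }

    public static String requireValidOrderId(String raw) {
        Objects.requireNonNull(raw, "order id is missing");
        if (!ORDER_ID.matcher(raw).matches()) {
            // Fail closed: reject anything that does not match the contract.
            throw new IllegalArgumentException("order id failed validation");
        }
        return raw; // from here on, the value is inside the trust boundary
    }
}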