

How to Cut Corners and Stay Cool

You have a task assigned to you, and you don’t like it. You are simply not in the mood. You don’t know how to fix that damn bug. You have no idea how that bloody module was designed, and you don’t know how it works. But you have to fix the issue, which was reported by someone who has no clue how this software works. You get frustrated and blame that stupid project manager and the programmers who were fired two years ago. You spend hours just to find out how the code works, then even more hours trying to fix it. In the end, you miss the deadline and everybody blames you. Been there, done that? There is, however, an alternative approach that provides a professional exit from this situation. Here are some tips I recommend to the peers who code with me on projects. In a nutshell, I’m going to explain how you can cut corners and remain professional, 1) protecting your nerves, 2) optimizing your project’s expenses, and 3) increasing the quality of the source code. Here is a list of options you have, in order of preference. I would recommend you start with the first one on the list and proceed down when you have to.

Create Dependencies, Blame Them, and Wait

This is the first and most preferable option. If you can’t figure out how to fix an issue or how to implement a new feature, it’s a fault of the project, not you. Even if you can’t figure it out because you don’t know anything about Ruby and they hired you to fix bugs in a Ruby on Rails code base — it’s their fault. Why did they hire you when you know nothing about Ruby? So be positive; don’t blame yourself. If you don’t know how this damn code works, it’s a fault of the code, not you. Good code is easy to understand and maintain. Don’t try to eat spaghetti code; complain to the chef and ask him or her to cook something better (BTW, I love spaghetti). How can you do that? Create dependencies — new bugs complaining about unclear design, lack of unit tests, absence of necessary classes, or whatever.
Be creative and offensive — in a constructive and professional way, of course. Don’t get personal. No matter who cooked that spaghetti, you have nothing against him or her personally. You just want another dish, that’s all. Once you have those dependencies reported, explain in the main ticket that you can’t continue until all of them are resolved. You will legally stop working, and someone else will improve the code you need. Later, when all dependencies are resolved and the code looks better, try to get back to it again. If you still see issues, create new dependencies. Keep doing this until the code in front of you is clean and easy to fix. Don’t be a hero — don’t rush into fixing the bad code you inherited. Think like a developer, not a hacker. Remember that your first and most important responsibility as a disciplined engineer is to help the project reveal maintainability issues. Who will fix them, and how, is the responsibility of the project manager. Your job is to reveal, not to hide. By being a hero and trying to fix everything in the scope of a single task, you’re not doing the project a favor — you’re concealing the problem(s). Edit: Another good example of a dependency may be a question raised at Stack Overflow, for example, or at a user list of a third-party library. If you can’t find a solution yourself and the problem is outside of the scope of your project — submit a question to SO and put a link to it in the source code (in a JavaDoc block, for example).

Demand Better Documentation and Wait

All dependencies are resolved and the code looks clean, but you still don’t understand how to fix the problem or implement a new feature. It’s too complex. Or maybe you just don’t know how this library works. Or you’ve never done anything like that before. Anyhow, you can’t continue because you don’t understand. And in order to understand, you will need a lot of time — much more than you have from your project manager or your Scrum board. What do you do?
Again, think positively and don’t blame yourself. If the software is not clear enough for a total stranger, it’s “their” fault, not yours. They created the software in a way that’s difficult to digest and modify. But the code is clean; it’s not spaghetti anymore. It’s a perfectly cooked lobster, but you don’t know how to eat lobster! You’ve never eaten it before. The chef did a good job; he cooked it well, but the restaurant didn’t give you any instructions on how to eat such a sophisticated dish. What do you do? You ask for a manual. You ask for documentation. Properly designed and written source code must be properly documented. Once you see that something is not clear to you, create new dependencies that ask for better documentation of certain aspects of the code. Again, don’t be a hero and try to understand everything yourself. Of course you’re a smart guy, but the project doesn’t need a single smart guy. The project needs maintainable code that is easy to modify, even by someone who is not as smart as you. So do your project a favor: reveal the documentation issue, and ask someone to fix it for you. Not just for you — for everybody. The entire team will benefit from such a request. Once the documentation is fixed, you will continue with your task, and everybody will get source code that is a bit better than it was before. Win-win, isn’t it?

Reproduce the Bug and Call It a Day

Now the code is clean, the documentation is good enough, but you’re stuck anyway. What to do? Well, I’m a big fan of test-driven development, so my next suggestion would be to create a test that reproduces the bug. Basically, this is what you should start every ticket with, be it a bug or a feature. Catch the bug with a unit test! Prove that the bug exists by failing the build with a new test. This may be rather difficult to achieve, especially when the software you’re trying to fix or modify was written by someone who had no idea about unit testing.
There are plenty of techniques that may help you find a way to make such software more testable. I would highly recommend you read Working Effectively with Legacy Code by Michael Feathers. There are many different patterns, and most of them work. Once you manage to reproduce the bug and the build fails, stop right there. That’s more than enough for a single piece of work. Skip the test (for example, using the @Ignore annotation in JUnit 4) and commit your changes. Then add documentation to the unit test you just created, preferably in the form of a @todo. Explain there that you managed to reproduce the problem but didn’t have enough time to fix it. Or maybe you just don’t know how to fix it. Be honest and give all possible details. I believe that catching a bug with a unit test is, in most cases, more than 80% of the success. The rest is much simpler: just fix the code and make the test pass. Leave this job to someone else.

Prove a Bug’s Absence

Very often you simply can’t reproduce a bug. That’s not because the code is not testable and can’t be used in a unit test but because you can’t reproduce an error situation. You know that the code crashes in production, but you can’t crash it in a test. The error stack trace reported by the end user or your production logging system is not reproducible. It’s a very common situation. What do you do? I think the best option here is to create a test that will prove that the code works as intended. The test won’t fail, and the build will remain clean. You will commit it to the repository and … report that the problem is solved. You will say that the reported bug doesn’t really exist in real life. You will state that there is no bug — “our software works correctly; here is the proof: see my new unit test.” Will they believe you? I don’t think so, but they don’t have a choice. They can’t push you any further. You’ve already done something — created a new test that proves everything is fine.
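The reproduce-document-skip workflow can be sketched as a JUnit 4 test. Everything here is a hypothetical stand-in (the PdfReport class, the ticket number); it only shows the shape of a reproduced-and-ignored bug, assuming JUnit 4 is on the classpath:

```java
// Sketch of "catch the bug with a unit test, then skip it and commit".
// PdfReport and ticket #142 are hypothetical; JUnit 4 is assumed.
import org.junit.Ignore;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PdfReportTest {

    /**
     * @todo #142 I managed to reproduce the bug from ticket #142:
     *  title() returns an empty string when the report name contains
     *  non-ASCII characters. I didn't have time to fix it, so the test
     *  is skipped for now. Remove @Ignore once the bug is fixed.
     */
    @Ignore
    @Test
    public void buildsTitleWithNonAsciiName() {
        PdfReport report = new PdfReport("Résumé");
        // Fails today: title() currently returns ""
        assertEquals("Résumé", report.title());
    }
}
```

The failing test goes into the repository with the @Ignore and the @todo; whoever picks the ticket up next starts from a reproduced bug instead of a vague report.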
The ticket will be closed and the project will move on. If, later on, the same problem occurs in production, a new bug will be reported. It will be linked to your ticket. Your experience will help someone investigate the bug further. Maybe that guy will also fail to catch the bug with a test and will also create a new, successful and “useless” test. And this may happen again and again. Eventually, this cumulative group experience will help the last guy catch the bug and fix it. Thus, a new passing test is a good response to a bug that you can’t catch with a unit test.

Disable the Feature

Sometimes the unit test technique won’t work, mostly because the bug will be too important to ignore. They won’t agree with you when you show them a unit test that proves the bug doesn’t exist. They will tell you that “when our users are trying to download a PDF, they get a blank page.” And they will also say they don’t really care about your bloody unit tests. All they care about is that PDF document that should be downloadable. So the trick with a unit test won’t work. What do you do? It depends on many factors, and most of these factors are not technical. They are political, organizational, managerial, social, you name it. However, in most cases, I would recommend you disable that toxic feature, release a new version, and close the ticket. You will take the problem off your shoulders, and everybody will be pleased. Well, except that poor end user. But this is not your problem. This is the fault of management, which didn’t organize pre-production testing properly. Again, don’t take this blame on yourself. Your job is to keep the code clean and finish your tickets in a reasonable amount of time. Their job is to make sure that developers, testers, DevOps, marketers, product managers, and designers work together to deliver the product with an acceptable number of errors. Production errors are not programmers’ mistakes, though delayed tickets are.
If you keep a ticket in your hands for too long, you become an unmanageable unit of work. They simply can’t manage you anymore. You’re doing something, trying to fix the bug, saying “I’m trying, I’m trying …” How can they manage such a guy? Instead, you should deliver quickly, even if it comes at the cost of a temporarily disabled feature.

Say No

OK, let’s say none of the above works. The code is clean, the documentation is acceptable, but you can’t catch the bug, and they don’t accept a unit test from you as proof of the bug’s absence. They also don’t allow you to disable a feature, because it is critical to the user experience. What choices do you have? Just one. Be professional and say “No, I can’t do this; find someone else.” Being a professional developer doesn’t mean being able to fix any problem. Instead, it means honesty. If you see that you can’t fix the problem, say so as soon as possible. Let them decide what to do. If they eventually decide to fire you because of that, you will remain a professional. They will remember you as a guy who was honest and took his reputation seriously. In the end, you will win. Don’t hold the task in your hands. The minute you realize you’re not the best guy for it, or you simply can’t fix it — notify your manager. Make it his problem. Actually, it is his problem in the first place. He hired you. He interviewed you. He decided to give you this task. He estimated your abilities and your skills. So it’s payback time. Your “No!” will be very valuable feedback for him. It will help him make his next important management decisions. On the other hand, if you lie just to give the impression you’re a guy who can fix anything and yet fail in the end, you will damage not only your reputation but also the project’s performance and objectives.

Reference: How to Cut Corners and Stay Cool from our JCG partner Yegor Bugayenko at the About Programming blog.

Sneak peek into the JCache API (JSR 107)

This post covers the JCache API at a high level and provides a teaser – just enough for you to (hopefully) start itching about it ;-) In this post:

- JCache overview
- JCache API, implementations
- Supported (Java) platforms for the JCache API
- Quick look at Oracle Coherence
- Fun stuff – Project Headlands (RESTified JCache by Adam Bien), JCache-related talks at JavaOne 2014, links to resources for learning more about JCache

What is JCache?

JCache (JSR 107) is a standard caching API for Java. It provides an API for applications to create and work with an in-memory cache of objects. The benefits are obvious – one does not need to concentrate on the finer details of implementing the caching, and time is better spent on the core business logic of the application.

JCache components

The specification itself is very compact and surprisingly intuitive. The API defines high-level components (interfaces), some of which are listed below:

- Caching Provider – used to control Caching Managers and can deal with several of them
- Cache Manager – deals with create, read, destroy operations on a Cache
- Cache – stores entries (the actual data) and exposes CRUD interfaces to deal with the entries
- Entry – abstraction on top of a key-value pair akin to a java.util.Map

(Figure: Hierarchy of JCache API components)

JCache implementations

JCache defines the interfaces, which of course are implemented by different vendors a.k.a. providers:

- Oracle Coherence
- Hazelcast
- Infinispan
- Ehcache
- Reference Implementation – this is more for reference purposes than a production-quality implementation. It is per the specification, though, and you can rest assured that it does in fact pass the TCK as well

From the application point of view, all that’s required is for the implementation to be present in the classpath. The API also provides a way to further fine-tune the properties specific to your provider via standard mechanisms.
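The four components fit together in a few lines of code. This is a minimal sketch, assuming the javax.cache API jar plus any one provider (the RI, Hazelcast, Ehcache, etc.) is on the classpath; the cache name "users" is an arbitrary example:

```java
// Minimal JCache (JSR 107) usage: provider -> manager -> cache -> entries.
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;

public class JCacheTeaser {
    public static void main(String[] args) {
        // Caching Provider: entry point, resolved from the classpath
        CachingProvider provider = Caching.getCachingProvider();
        // Cache Manager: creates and destroys caches
        CacheManager manager = provider.getCacheManager();
        // Cache: configured and created by name
        MutableConfiguration<Integer, String> config =
            new MutableConfiguration<Integer, String>()
                .setTypes(Integer.class, String.class);
        Cache<Integer, String> users = manager.createCache("users", config);
        // Entry: key-value pairs, akin to java.util.Map
        users.put(1, "duke");
        System.out.println(users.get(1)); // prints "duke"
        provider.close();
    }
}
```

Note that nothing here is vendor-specific; swapping the provider jar on the classpath changes the implementation without touching this code.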
You should be able to track the list of JCache reference implementations from the JCP website.

JCache provider detection

JCache provider detection happens automatically when you only have a single JCache provider on the class path. You can also select a provider explicitly.

Java platform support

- Compliant with Java SE 6 and above
- Does not define any details in terms of Java EE integration. This does not mean that it cannot be used in a Java EE environment – it’s just not standardized yet.
- Could not be plugged into Java EE 7 as a tried and tested standard
- Candidate for Java EE 8

Project Headlands: Java EE and JCache in tandem

- By none other than Adam Bien himself!
- Java EE 7, Java SE 8 and JCache in action
- Exposes the JCache API via JAX-RS (REST)
- Uses Hazelcast as the JCache provider
- Highly recommended!

Oracle Coherence

This post deals with high-level stuff w.r.t. JCache in general. However, a few lines about Oracle Coherence would help put things in perspective:

- Oracle Coherence is a part of Oracle’s Cloud Application Foundation stack
- It is primarily an in-memory data grid solution
- Geared towards making applications more scalable in general
- What’s important to know is that from version 12.1.3 onwards, Oracle Coherence includes a reference implementation of JCache (more in the next section)

JCache support in Oracle Coherence

- Support for JCache implies that applications can now use a standard API to access the capabilities of Oracle Coherence
- This is made possible by Coherence simply providing an abstraction over its existing interfaces (NamedCache etc.)
The application deals with a standard interface (the JCache API), and the calls to the API are delegated to the existing Coherence core library implementation. Support for the JCache API also means that one does not need to use Coherence-specific APIs in the application, resulting in vendor-neutral code, which equals portability. How ironic – supporting a standard API and always keeping your competitors in the hunt ;-) But hey! That’s what healthy competition and quality software are all about! Talking of healthy competition – Oracle Coherence does support a host of other features in addition to the standard JCache-related capabilities. The Oracle Coherence distribution contains all the libraries for working with the JCache implementation. The service definition file in coherence-jcache.jar qualifies it as a valid JCache provider implementation.

Curious about Oracle Coherence?

- Quick Starter page
- Documentation
- Installation
- Further reading about the Coherence and JCache combo – Oracle Coherence documentation

JCache at JavaOne 2014

A couple of great talks revolving around JCache at JavaOne 2014:

- Come, Code, Cache, Compute! by Steve Millidge
- Using the New JCache by Brian Oliver and Greg Luck

Hope this was fun :-) Cheers!

Reference: Sneak peek into the JCache API (JSR 107) from our JCG partner Abhishek Gupta at the Object Oriented.. blog.

The New Agile–Picking A Winner

We’ve talked about scaling and methodology on how to build stuff, but hey, we want to know what to build, dammit! Unfortunately, SAFe, Scrum, XP, or Lean Startup don’t talk about what we need to build. Just how to get it out the door. Picking a winning product seems like the holy grail. Business analysis and product management methods can help us with it. The problem is that some of the regular methods didn’t deliver. They relied on gut feelings, or on subjective interpretation of information. And they didn’t take into account how complex the market is. A couple of methods that have a lot in common have surfaced in the last decade and changed the way we think about the problem. They try to revert the way we build to a way that makes sense in our reality. First we need to explain what doesn’t make sense. Let’s say I, the product manager, know I need a feature. The team spends 6 months building it, then shows it to me, and I say, “It’s not what I wanted”. Sounds familiar? We can say that the customer (me) never knows what he wants, but that’s getting off easy. The real problem is that the team didn’t understand what problem it was solving. If the team had understood the problem, they might have come up with a different feature to solve it, and wouldn’t have wasted 6 precious months. The problem starts with us not asking “what problem does the feature solve?”. Even more so, we should start with the problem, and then build the feature that solves it. This is what stands behind Chris Matts’ Feature Injection. Apart from the cool name, there is real understanding of the need, and only then a suggestion for a solution. It’s what stands behind Gojko Adzic’s Impact Mapping, in which we start with the impact we want to achieve, then figure out how to build it. And it’s what stands behind Liz Keogh’s Capability Red, where we want to understand the customer in order to develop a solution for their problem.
And don’t forget Design Thinking – actually seeing what the problems are, then coming up with a solution. In all cases, there might be a smaller, easier, and cheaper solution than the one we thought of first. Once we understand the need or the problem, we can suggest multiple solutions, then pick one to try. You are probably thinking at this point: This is just common sense. These are simple ideas. Why, I could have thought of that! Well, it’s not that common. Organizations continue to develop what their product people think, they deploy the solution, and then they see what happens, only understanding what happened in hindsight. That’s a big-ass feedback cycle. It may not be common, it may not be easy, but it is simple agile sense.

Reference: The New Agile – Picking A Winner from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

How to secure Openfire XMPP server

Introduction

Instant Messaging (IM), or chat, is a service used broadly today in many applications like Google Talk (or the more recent Google Hangouts), Yahoo! Messenger, etc. It is based on the Extensible Messaging and Presence Protocol (XMPP), or Jabber, protocol. Usually a client-server architecture is followed, where specially built XMPP clients exchange XMPP messages with XMPP servers, which propagate the messages to other clients. Chatrooms are supported, where many clients share a common view to talk, as are rosters, which allow your ‘friends’ to see your status, i.e. your availability, etc. If you want to learn more about the XMPP protocol, take a look at reference [2]. Openfire is an XMPP-enabled real-time collaboration (RTC) server provided by Ignite Realtime under the open-source GPL. In this article we will focus on how to make Openfire exchange secure messages. We will show how to allow clients to exchange XMPP messages with Openfire in a secure (aka encrypted) way. We will go one step further and show how to exchange XMPP messages among Openfire servers in a secure way. If the destination client is not served by the local Openfire server, then Openfire needs to communicate with another Openfire server instance that serves the destination client. In this case, the two Openfire instances need to exchange XMPP messages in an encrypted way, as shown in the following figure. The articles in the references at the end of this article give you an introduction on how to install and configure Openfire. In this article we will focus on security issues only.

SSL Certificates

Openfire can generate self-signed DSA and RSA certificates through its Administrator Web Console (Server Settings | Server Certificates) (Figure 4). These can be used for server-to-server as well as for client-server communication (Server Settings | Security Settings), as shown in Figure 2. To generate self-signed certificates, click on the link (Click here to generate…) as shown in the following figure.
You will need to restart Openfire for the changes to take effect. Figure 4 shows an RSA and a DSA self-signed certificate generated by the above procedure. To allow server or client secure communication with this Openfire server, make sure you have checked the “Accept self-signed certificates. Server dialback over TLS is now available” checkbox (see Figure 2), and select the Required radio button for both server-to-server and client-to-server communication if you want end-to-end encryption. For further security you can restrict the remote servers that you can connect to (see Figure 5). For secure client-to-server communication you must also enable TLS or SSL encryption in your client[1], something I could not find supported in Spark, the open-source IM client provided by Ignite Realtime. Jitsi and Trillian are other modern IM clients that support TLS/SSL certificates. However, self-signed certificates are vulnerable to man-in-the-middle attacks. For that reason, the most secure way is to import certificates provided by a Certification Authority (CA). Certificates can either be created by Openfire and signed by a CA after generating a Certificate Signing Request (CSR), or they can be created and signed by the CA and later imported into Openfire. In the latter case, a private key and the signed certificate need to be imported into Openfire. The issuer information for the certificates should be updated before sending the CSR to the CA. The Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files might need to be installed on the server that hosts Openfire if you get an “Invalid key size” error. The certificate key size varies per CA; let’s assume that it is 2048 bits. CAs can support both CSR and PFX/PKCS12 certificates. Table 1.
Minimum key size per year to be secure:

Year   AES   RSA, DH, ElGamal
2010    78   1369
2020    86   1881
2030    93   2493
2040   101   3214

Examples of importing these types of certificates are shown below. The following sections explain the procedure in more detail.

CSR:

$ keytool -certreq -keyalg RSA -alias myalias -file certreq.txt -keystore /path/to/openfire/server/instance/resources/security/keystore

(a keystore must already exist)

PKCS12:

$ keytool -genkey -alias {desired alias certificate} -keystore {path to keystore.pfx} -storepass {password} -validity 365 -keyalg RSA -keysize 2048 -storetype pkcs12

(problems seem to have been encountered; e.g. you might need to convert them to .pem)

Keep in mind that certificates issued by CAs are not free[2]. So, acquiring certificates from a Certification Authority (CA) for each client of your enterprise might be prohibitively expensive. For more information on configuring SSL with Openfire, see reference [1]. The following section is an adaptation of the “SSL Guide” for server-to-server communication.

SSL Guide

Openfire’s SSL support is built using the standard Java security SSL implementation. A server SSL connection uses two sets of certificates to secure the connection. The first set is called a “keystore”. The keystore contains the keys and certificates for the server. These security credentials are used to prove to clients or other servers that the server is legitimately operating on behalf of a particular domain. If your server will only need to act as one domain, you only need one key entry and certificate in the keystore. Keys are stored in the keystore under aliases. Each alias corresponds to a domain name (e.g. “”). The second set of certificates is called the “truststore” and is used to verify that another server (or a client) is legitimately operating on behalf of a particular user.
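The keystore and truststore files described above are ordinary JKS files that both keytool and Java code manipulate through java.security.KeyStore. This small sketch (file name and password are arbitrary examples) mirrors what keytool does when it creates a truststore on first import:

```java
// Create an empty JKS truststore on disk, then re-open it and list its
// aliases - the same file format keytool and Openfire work with.
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.util.Collections;

public class TruststorePeek {
    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray(); // keytool's default

        // Loading with a null stream initializes an empty keystore,
        // which we then persist to a file named "truststore".
        KeyStore trust = KeyStore.getInstance("JKS");
        trust.load(null, password);
        try (FileOutputStream out = new FileOutputStream("truststore")) {
            trust.store(out, password);
        }

        // Re-open the file and enumerate its aliases (none yet; each
        // keytool -importcert would add one trustedCertEntry).
        KeyStore reloaded = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("truststore")) {
            reloaded.load(in, password);
        }
        System.out.println(Collections.list(reloaded.aliases())); // prints []
    }
}
```

Opening the store with the wrong password fails the integrity check, which is why keytool prompts for it on every -list and -importcert.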
In the vast majority of cases, the truststore is empty, and the server will not attempt to validate client connections using SSL. Instead, the XMPP authentication process verifies users in-band. However, for server-to-server communication you must require SSL authentication. In other words, for an Openfire server to be able to communicate with another, its truststore must contain a valid certificate for the other Openfire server. Certificates attempt to guarantee that a particular party is who they claim to be. Certificates are trusted based on who signed the certificate. If you only require light security, are deploying for internal use on trusted networks, etc., you can use “self-signed” certificates. Self-signed certificates encrypt the communication channel between client and server or between servers. However, the client or other server must verify the legitimacy of the self-signed certificate through some other channel. The most common client reaction to a self-signed certificate is to ask the user whether to trust the certificate, or to silently trust that the certificate is legitimate. Unfortunately, blindly accepting self-signed certificates opens up the system to ‘man-in-the-middle’ attacks. The advantage of a self-signed certificate is that you can create one for free, which is great when cost is a major concern, or for testing and evaluation. In addition, you can safely use a self-signed certificate if you can verify that the certificate you’re using is legitimate. So if a system administrator creates a self-signed certificate, then personally installs it in a client’s or another server’s truststore (so that the certificate is trusted), you can be assured that the SSL connection will only work between the client or the other server and the correct server. For higher-security deployments, you should get your certificate signed by a certificate authority (CA).
Servers’ truststores will usually contain the certificates of the major CAs and can verify that a CA has signed a certificate. This chain of trust allows servers to trust certificates from other servers they’ve never interacted with before. Certificate signing is similar to a public notary (with equivalent amounts of verification of identity, record keeping, and costs). The Oracle JDK (version 1.5.x or later) ships with all the security tools you need to configure SSL with Openfire. The most important is the keytool, located in the $JAVA_HOME/bin directory of the JDK. Oracle JVMs persist keystores and truststores on the filesystem as encrypted files. The keytool is used to create, read, update, and delete entries in these files. Openfire ships with a self-signed “dummy” certificate designed for initial evaluation testing. You will need to adjust the default configuration for most deployments. In order to configure SSL on your server you need to complete the following tasks:

1. Decide on your Openfire server’s domain.
2. Import CA root certificates into the truststore.
3. Create a new keystore and import CA root certificates into the keystore too.
4. Have a certificate authority (CA) certify the SSL server certificate: generate a certificate signing request (CSR) and submit your CSR to the CA for signing.
5. Import the server certificate into the keystore.
6. Adjust the Openfire configuration with the proper keystore and truststore.

1. Decide on a Server Domain

The Openfire server domain should follow the naming convention used in your organisation (if any). Send the FQDN to the CA; this will allow them to create a reference number for this user in their system (other CAs may use different procedures to issue certificates). They will then send you the reference number, which you require when you create the keystore. (E.g. FQDN: -> Reference number: 0123456).
2. Import CA Root Certificates into the Truststore

To be able to verify other Openfire servers offline using certificates, you need to obtain the CA’s root certificates and import them into the truststore. Backup and delete /path/to/openfire/server/instance/resources/security/truststore, then import each certificate using the keytool (a new truststore will be created when the first certificate is imported):

$ cd /path/to/openfire/server/instance/resources/security
$ keytool -printcert -file 1.CA_ROOT.cer
Owner: CN=RootCA, O=CA
Issuer: CN=RootCA, O=CA
Serial number: 8968c69c7e23143c
Valid from: Tue Oct 06 19:10:22 CEST 2009 until: Fri Oct 06 19:10:22 CEST 2034
...
$ keytool -printcert -file 2.SCA.cer
Owner: OU=SCA, O=CA
Issuer: CN=RootCA, O=ca
Serial number: 9e95a6bad57ae
Valid from: Thu Oct 11 17:44:57 CEST 2012 until: Fri Jan 01 05:59:00 CET 2016
Certificate fingerprints:
...
$ keytool -importcert -alias CA_ROOT -keystore truststore -file 1.CA_ROOT.cer
Enter keystore password:
Re-enter new password:
Owner: CN=RootCA, O=CA
Issuer: CN=RootCA, O=CA
...
Trust this certificate? [no]: yes
Certificate was added to keystore
$ keytool -importcert -alias SCA -keystore truststore -file 2.SCA.cer
Enter keystore password:
Certificate was added to keystore

(or use -import instead of -importcert). Verify that your truststore contains the two certificates:

$ keytool -list -keystore truststore
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 2 entries
ca_root, Nov 5, 2014, trustedCertEntry,
Certificate fingerprint (SHA1): 8A:9E:6F:8D:2A:C1:A1:9A:1F:6C:01:85:D9:6C:08:00:69:70
sca, Nov 5, 2014, trustedCertEntry,
Certificate fingerprint (SHA1): AE:EB:DA:AF:4E:40:0D:00:4C:2F:66:6E:50:7E:1C:19:17:FF
3. Create a New Keystore

Backup and delete /path/to/openfire/server/instance/resources/security/keystore. To create a new keystore with a 2048-bit key you need to generate a new key pair (make sure to use the reference number from the CA as the CN):

$ cd /path/to/openfire/server/instance/resources/security/
$ keytool -genkeypair -alias chat.mycompany.com_rsa -keyalg RSA -keysize 2048 -keystore keystore
Enter keystore password:
Re-enter keystore password:
What is your first and last name? [Unknown]: 0123456
What is the name of your organizational unit? [Unknown]: IT
What is the name of your organization? [Unknown]: MyCompany
What is the name of your City or Locality? [Unknown]: Somewhere
What is the name of your State or Province? [Unknown]:
What is the two-letter country code for this unit? [Unknown]: XX
Is CN=0123456, OU=IT, O=MyCompany, L=Somewhere, ST=Unknown, C=XX correct? [no]: yes
Enter key password for <chat.mycompany.com_rsa> (RETURN if same as keystore password):

To view it:

$ keytool -list -v -keystore keystore
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 1 entry
Alias name: chat.mycompany.org_rsa
Creation date: Nov 4, 2014
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=0123456, OU=IT, O=MyCompany, L=Somewhere, ST=Unknown, C=XX
Issuer: CN=0123456, OU=IT, O=MyCompany, L=Somewhere, ST=Unknown, C=XX
Serial number: 590d8837
Valid from: Tue Nov 04 17:54:18 CET 2014 until: Mon Feb 02 17:54:18 CET 2015
Certificate fingerprints:
MD5: 60:86:13:C0:B7:26:4D:92:10:54:65:C7
SHA1: 1B:14:90:EB:68:F3:CC:C1:51:1C:96:55:24:48:1E:2D
SHA256: F5:1C:62:CA:D3:AF:E7A8:9E:51:B2:7F:8543:12:81:46:CD:60:A9:CE
Signature algorithm name: SHA256withRSA
Version: 3
Extensions:
#1: ObjectId: Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: D1 5E 5C D1 02 44 B6 47 9D DE .^\..D....P..G..
0010: 1D AE 49
]
]
*******************************************
*******************************************

You must also add the root and sub-root certificates of the CA to the keystore, so that the signed certificate can be verified offline.

$ keytool -importcert -alias CA_ROOT -keystore keystore -file 1.CA_ROOT.cer
Enter keystore password:
Re-enter new password:
Owner: CN=RootCA, O=CA
Issuer: CN=RootCA, O=CA
...
Trust this certificate? [no]: yes
Certificate was added to keystore
$ keytool -importcert -alias SCA -keystore keystore -file 2.SCA.cer
Enter keystore password:
Certificate was added to keystore

(Again, -import can be used instead of -importcert.) Verify that your keystore contains three entries:

$ keytool -list -keystore keystore
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 3 entries
ca_root, Nov 5, 2014, trustedCertEntry, Certificate fingerprint (SHA1): 8A:9E:6F:8D:1F:6C:01:85:D9:6C:21:91:08:00:69:70
sca, Nov 5, 2014, trustedCertEntry, Certificate fingerprint (SHA1): AE:EB:0D:00:4C:2F:66:6E:50:7E:7E:CC:1C:19:17:FF
...

Restart the Openfire server after you have modified any of the above system properties.

4.a Generate a certificate signing request (CSR)

$ cd /path/to/openfire/server/instance/resources/security/
$ keytool -certreq -alias chat.mycompany.org_rsa -file chat.mycompany.org_rsa.csr -keystore keystore

This command will generate the CSR chat.mycompany.org_rsa.csr. To verify the CSR, issue the command:

$ keytool -printcertreq -file chat.mycompany.org_rsa.csr
PKCS #10 Certificate Request (Version 1.0)
Subject: CN=0123456, OU=IT, O=MyCompany, L=Somewhere, ST=Unknown, C=XX
Public Key: X.509 format RSA key
Extension Request:
#1: ObjectId: Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: D1 5E 5C D1 02 44 A2 B6 47 9D DE .^\..D....P..G..
0010: 1D AE 49
]
]

4.b Submit your CSR to a CA for signing

Send the generated CSR chat.mycompany.org_rsa.csr to your CA to get it signed.
5. Import the server certificate into the keystore

After you have received the certificate signed by the CA, you must import it using keytool:

$ cd /path/to/openfire/server/instance/resources/security
$ keytool -importcert -alias chat.mycompany.org_rsa -keystore keystore -file chat.mycompany.org_rsa.cer

(Or -import instead of -importcert.) It is important that the alias does not already have an associated key or you'll receive an error. Restart Openfire for the changes to take effect. To find out more about keytool see here. To change the default truststore (or keystore) password:

$ keytool -storepasswd -keystore truststore

(use -keystore keystore for the keystore). keytool will ask for the old password (by default it is changeit) and then for the new password.

Epilogue

In this article we described how to enable secure communication between two Openfire servers and between Openfire and a client. For that purpose, we saw how to enable self-signed SSL certificates as well as how to import certificates signed by a Certificate Authority. You can experiment with other types of certificates (e.g. PFX/PKCS12) and adapt the procedure according to your CA. Happy (secure) chatting.

References:
Openfire SSL Guide.
Saint-Andre P., Smith K., Troncon R. (2009), XMPP: The Definitive Guide, O'Reilly.
Tsagklis I. (2010a), "Openfire server installation – Infrastructure for Instant Messaging", Java Code Geeks.
Tsagklis I. (2010b), "Openfire server configuration – Infrastructure for Instant Messaging", Java Code Geeks.
Tsagklis I. (2010c), "XMPP IM with Smack for Java applications – Infrastructure for Instant Messaging", Java Code Geeks.

[1] See e.g.
[2] Some free ones are CAcert, StartCom/StartSSL, Let's Encrypt. ...
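The `keytool -storepasswd` step can also be expressed with the java.security.KeyStore API, which can be handy in provisioning scripts. The following is a minimal sketch under my own naming (StorePassword and changePassword are illustrative, not part of Openfire or keytool); it re-encodes a JKS store under a new password, demonstrated here fully in memory so it is runnable without an Openfire installation:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;

public class StorePassword {

    // Re-encodes a JKS keystore under a new store password,
    // which is what `keytool -storepasswd` does to a keystore file.
    public static byte[] changePassword(byte[] jks, char[] oldPw, char[] newPw) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(new ByteArrayInputStream(jks), oldPw);   // verifies the old integrity password
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, newPw);                            // re-writes under the new password
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Start from an empty in-memory keystore protected with the default "changeit".
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, "changeit".toCharArray());

        byte[] updated = changePassword(out.toByteArray(),
                "changeit".toCharArray(), "s3cret".toCharArray());
        System.out.println("Re-encoded keystore, " + updated.length + " bytes");
    }
}
```

For a real Openfire instance you would read and write the truststore/keystore files under resources/security instead of the in-memory byte arrays.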

Retrospective Dialogue Sheets updates and changes

When I say changes, I'm not saying anything about changes to the sheets themselves. I mean I've been thinking about how I make the sheets available, and I'm going to make two changes. First, I'm going to remove the print-on-demand service for the sheets. Second, I'm going to remove the need to register before downloading a sheet. You can still find all the sheets in the same place; now they will be a little harder to get in print and a little easier to get online. Now I'd like to explain why I'm making these changes. I've always felt I needed to offer people the option to get printed sheets, hence the print-on-demand service. However, not many people use the service. I might once have thought I could make a little money off the service, but I long ago gave up any such dreams; it doesn't get used enough to make me rich! It seems most people either have large printers or get the sheets printed by their local print shop – I use Kall-Kwik myself. To complicate matters, when it does make me a little money, the company which provides the service (Mimeo) sends me a cheque. Or rather a check, against a US bank in US dollars. Since the sums are small, the cheques cost more to cash than they are worth. This is a shame, because when Lulu or LeanPub send me money – in dollars – they use PayPal, which I can access easily. Add to this the complexity of keeping the print-on-demand shop up to date and it's just not worth it. Second, the need to register. When I first made the sheets available I really wanted feedback on who was using them, how they found them and so on. In the early days I would e-mail people and ask "What was your experience?" That was like getting blood out of a stone. Very few people replied. Those who did gave me very useful feedback which allowed me to adjust the sheets, and made me feel good. I stopped this about the time InfoQ published my piece on Dialogue Sheets – three years ago; wow, how time flies.
Since then there have been too many downloads to go asking for feedback – oh, I could mail a few people, but that requires work. Right now there have been over 1,300 registrations in the last two years, and I know there were several thousand before then. In the meantime a few people considered my request to register an imposition; I've had a couple of people tell me so to my face. All I wanted was feedback, but this put people off. I have on occasion given dialogue sheets away – they are part of the package when you buy a course from me, but I also regularly give spare sheets away after conference presentations. When I do so I ask – no, beg – people to send me feedback, but they rarely – no, never – do. I remember a man from the BBC who took a spare sheet at Agile Cambridge. He promised to send me feedback on what his team thought. I never heard from him again. I guess it went in the bin. Maybe I'm a little bitter, but actually the point I'm trying to make is: it's hard to get feedback! I once planned to send a newsletter to everyone. But I never got around to it. I once hoped a mailing list would take off, but it never did. Probably if I had put more effort into any of those things they would have done better, but as it is I think Dialogue Sheets are a success. Thousands of downloads are a success. Popular articles on InfoQ and elsewhere are a success. Conference sessions using the sheets are always well received – and I'm doing one again at DevWeek next month in London. I sometimes meet people who know of me because of the sheets; that is a success. And I get occasional e-mails telling me the sheets are being used and they are good. Anyway, if you have not tried them yet, give Dialogue Sheets a go in your next retrospective.

Reference: Retrospective Dialogue Sheets updates and changes from our JCG partner Allan Kelly at the Agile, Lean, Patterns blog....

SweetHomeHub: Home Control with Raspberry Pi and MQTT – Part 1

For quite a long time I have been working on my universal Raspberry Pi based Intertechno remote (see former posts 1 2 3 4). I tried different approaches to trigger/control my remote control service: via a custom HTTPServer/-Handler and a simple Vert.x verticle. Since MQTT v3.1.1 has turned out to be one of the de-facto standard protocols for the IoT, I also implemented an MQTT client. This MQTT client basically follows two design patterns:

One topic for each device
For each device a topic is defined. Its state can be controlled by publishing a message with payload "ON" or "OFF".
Pro:
- the user does not need to know the address code of the Intertechno device
- changes of the address do not need to be published
- the message to control the device is simply "ON" or "OFF"
Contra:
- the user must know the topic for each device
- the user can only control configured devices

One topic for a JSON message
Pro:
- very flexible control of the devices
Contra:
- the user must know the syntax of the JSON and the coding of devices

Solution: provide both options. My configuration is very simple. On start-up the client searches for sweethomehub-config.xml in the user's home directory, which is then unmarshalled with JAXB.
This configuration contains the codes and the topic for each device and the MQTT settings for the broker connection:

<configuration>
  <devices>
    <device>
      <houseCode>a</houseCode>
      <groupId>1</groupId>
      <deviceId>1</deviceId>
      <name>Light Front-Door</name>
      <mqttTopic>front/lights/door</mqttTopic>
    </device>
    <device>
      <houseCode>a</houseCode>
      <groupId>1</groupId>
      <deviceId>2</deviceId>
      <name>Light Terrace</name>
      <mqttTopic>garden/lights/terrace</mqttTopic>
    </device>
    <device>
      <houseCode>a</houseCode>
      <groupId>1</groupId>
      <deviceId>3</deviceId>
      <name>Fountain</name>
      <mqttTopic>garden/devices/fountain</mqttTopic>
    </device>
    <device>
      <houseCode>a</houseCode>
      <groupId>1</groupId>
      <deviceId>4</deviceId>
      <name>Light Garden</name>
      <mqttTopic>garden/lights/ambiente</mqttTopic>
    </device>
    <device>
      <houseCode>a</houseCode>
      <groupId>1</groupId>
      <deviceId>3</deviceId>
      <name>Light Living Room</name>
      <mqttTopic>livingroom/lights/ambiente</mqttTopic>
    </device>
  </devices>
  <mqttClientConfiguration>
    <mqttClientId>SweethoemMQTTClientId</mqttClientId>
    <mqttBrokerAddress>sweethome</mqttBrokerAddress>
    <mqttBrokerPort>1883</mqttBrokerPort>
    <mqttMessagesBaseTopic>sweethome</mqttMessagesBaseTopic>
  </mqttClientConfiguration>
</configuration>

And there is one additional topic awaiting the JSON commands: sweethome/devices/jsoncommand

{
  "devices": [
    {
      "device": {
        "name": "Light Front-Door",
        "houseCode": "a",
        "groupId": "1",
        "deviceId": "1"
      },
      "command": "ON"
    },
    {
      "device": {
        "name": "Light Terrace",
        "houseCode": "a",
        "groupId": "1",
        "deviceId": "2"
      },
      "command": "ON"
    },
    {
      "device": {
        "name": "Light Living Room",
        "houseCode": "a",
        "groupId": "1",
        "deviceId": "3"
      },
      "command": "ON"
    }
  ]
}

The central method to handle arrived messages, the JsonDeviceCommandProcessor and the doSwitch methods are shown in the original post as screenshots, as are the MQTT client running on the Raspberry Pi waiting for messages, receiving command messages, and testing the receiver with MQTT.fx. The complete code can be found at BitBucket.

Reference: SweetHomeHub: Home Control with Raspberry Pi and MQTT – Part 1 from our JCG partner Jens Deters at the JavaFX Delight blog....
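The message-handling code itself is only shown as screenshots in the original post, but the first pattern — one topic per device, payload "ON"/"OFF" — essentially boils down to a lookup from configured topic to device address. Here is a rough, dependency-free sketch of that dispatch step; the class and method names (TopicDispatcher, Device, handleMessage) are mine, not taken from the SweetHomeHub code, and a real implementation would trigger the RF transmitter instead of returning a string:

```java
import java.util.HashMap;
import java.util.Map;

public class TopicDispatcher {

    // Hypothetical device address, mirroring houseCode/groupId/deviceId from the XML config.
    public static class Device {
        final String houseCode;
        final int groupId;
        final int deviceId;

        public Device(String houseCode, int groupId, int deviceId) {
            this.houseCode = houseCode;
            this.groupId = groupId;
            this.deviceId = deviceId;
        }
    }

    private final Map<String, Device> devicesByTopic = new HashMap<>();

    // Called once per <device> element when the configuration is read.
    public void register(String topic, Device device) {
        devicesByTopic.put(topic, device);
    }

    // Pattern 1: one topic per device, payload is simply "ON" or "OFF".
    // Returns a description of the resulting switch command, or null if the
    // topic is unknown or the payload is not a valid command.
    public String handleMessage(String topic, String payload) {
        Device d = devicesByTopic.get(topic);
        if (d == null || !(payload.equals("ON") || payload.equals("OFF"))) {
            return null;
        }
        return d.houseCode + "-" + d.groupId + "-" + d.deviceId + ":" + payload;
    }

    public static void main(String[] args) {
        TopicDispatcher dispatcher = new TopicDispatcher();
        dispatcher.register("garden/lights/terrace", new Device("a", 1, 2));
        System.out.println(dispatcher.handleMessage("garden/lights/terrace", "ON")); // prints a-1-2:ON
    }
}
```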

Netflix Governator Tests – Introducing governator-junit-runner

Consider a typical Netflix Governator JUnit test:

public class SampleWithGovernatorJunitSupportTest {

    @Rule
    public LifecycleTester tester = new LifecycleTester();

    @Test
    public void testExampleBeanInjection() throws Exception {
        tester.start();
        Injector injector = tester
                .builder()
                .withBootstrapModule(new SampleBootstrapModule())
                .withModuleClass(SampleModule.class)
                .usingBasePackages("")
                .build()
                .createInjector();

        BlogService blogService = injector.getInstance(BlogService.class);
        assertThat(blogService.get(1l), is(notNullValue()));
        assertThat(blogService.getBlogServiceName(), equalTo("Test Blog Service"));
    }
}

This test leverages the JUnit rule support provided by Netflix Governator and exercises some of Governator's feature set – bootstrap modules, package scanning, configuration support etc. The test however has quite a lot of boilerplate code, which I felt could be reduced by leveraging a JUnit runner model instead. As a proof of this concept, I am introducing the unimaginatively named project – governator-junit-runner. Consider now the same test re-written using this library:

@RunWith(GovernatorJunit4Runner.class)
@LifecycleInjectorParams(modules = SampleModule.class, bootstrapModule = SampleBootstrapModule.class, scannedPackages = "")
public class SampleGovernatorRunnerTest {

    @Inject
    private BlogService blogService;

    @Test
    public void testExampleBeanInjection() throws Exception {
        assertNotNull(blogService.get(1l));
        assertEquals("Test Blog Service", blogService.getBlogServiceName());
    }
}

Most of the boilerplate is now implemented within the JUnit runner, and the parameters required to bootstrap Governator are passed in through the LifecycleInjectorParams annotation. The test instance itself is a bound component and thus can be injected into; this way the instances which need to be tested can be injected into the test itself and asserted on.
If you want more fine-grained control, the LifecycleManager itself can be injected into the test:

@Inject
private Injector injector;

@Inject
private LifecycleManager lifecycleManager;

If this interests you, more samples are at the project site here.

Reference: Netflix Governator Tests – Introducing governator-junit-runner from our JCG partner Biju Kunjummen at the all and sundry blog....
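Under the hood, a runner like this has to do two things before each test: bootstrap the injector from the annotation parameters, and then inject the members of the test instance. Stripped of the Governator and Guice specifics, the member-injection step is plain reflection. A toy sketch of just that step (the @Inject annotation here is a local stand-in, not javax.inject.Inject, and a real runner would resolve bindings through the created Injector rather than a map):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Map;

public class FieldInjector {

    // Stand-in for javax.inject.Inject, to keep the sketch dependency-free.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Inject {
    }

    // Sets every @Inject-annotated field from the bindings map, keyed by field
    // type. A real injector resolves bindings through Guice/Governator instead.
    public static void injectMembers(Object target, Map<Class<?>, Object> bindings) throws Exception {
        for (Field f : target.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(Inject.class) && bindings.containsKey(f.getType())) {
                f.setAccessible(true);
                f.set(target, bindings.get(f.getType()));
            }
        }
    }
}
```

This is roughly what happens to the `blogService` field of SampleGovernatorRunnerTest before each test method runs.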

Creating Android Apps with Groovy 2.4

A few days ago Groovy 2.4 was released. One of the major news items is that Groovy now officially supports Android application development. To see how this works I used Groovy to create a small ToDo list example application for Android. In this post I will show which steps are required to create an Android application with Groovy and how Groovy can simplify Android application development. The following screen shows the example application written in Groovy. You can find the full source code on GitHub.

Running Groovy on Android

First we need Android Studio, which already contains the latest version of the Android SDK. Over the last year the default Android environment changed from Eclipse and Ant to Android Studio (built on IntelliJ) and Gradle. To run Groovy on Android we will need a Gradle plugin, so make sure you are not using the old Eclipse/Ant based development tools. We create a new Android project in Android Studio and add the following lines to our build files:

Top level build file (<project>/build.gradle):

buildscript {
  ..
  dependencies {
    ..
    classpath 'org.codehaus.groovy:gradle-groovy-android-plugin:0.3.5'
  }
}

App build file (<project>/app/build.gradle):

apply plugin: ''

// apply Groovy Android plugin after the standard Android plugin
apply plugin: 'groovyx.grooid.groovy-android'

dependencies {
  ..
  compile 'org.codehaus.groovy:groovy:2.4.0:grooid'
}

Source and documentation of the Groovy Android Gradle plugin can be found on GitHub. This is all the configuration we need; now we can move straight to Groovy code. Please note that Groovy code files need to be placed in src/main/groovy instead of src/main/java. Adding Groovy files to src/main/java will not work! Developing Android apps in Groovy works exactly the same way as in Java. Because of Groovy's Java interoperability, you can use and extend Android classes like you would do in Java.

Improving the Groovy experience in Android Studio

Android Studio already contains the Groovy plugin.
So, you get Groovy syntax support out of the box. However, you might miss the option to create new Groovy classes from the context menu. Luckily this can easily be configured in Android Studio. You simply have to create a new file template (Settings > File and code templates) and add the following template code:

#if (${PACKAGE_NAME} && ${PACKAGE_NAME} != "")package ${PACKAGE_NAME};#end
class ${NAME} {
}

Now you can quickly create new Groovy classes using the context menu. You might also look into this plugin, which fixes the issue that you get no auto completion when overriding super class methods in Groovy. Thanks to @arasthel92 who told me about this plugin.

Running Groovy applications

Running Groovy apps is identical to running Java apps. We can simply press the run (or debug) button in Android Studio and deploy the application to a connected (or emulated) device. The cool thing is that the Groovy debugger works out of the box; we can debug running Groovy applications from Android Studio.

The great parts

The cool thing about Groovy is that it greatly reduces the lines of code you need to write. Its dynamic nature also lets you get rid of all the type casts that are typically required when working with Android. One example of this can be found in ToDoListActivity.onResume(). In this method the data of an Android ListAdapter is modified. With Java this would look like this:

ArrayAdapter<ToDo> adapter = (ArrayAdapter<ToDo>) getListAdapter();
ToDoApplication application = (ToDoApplication) getApplication();
adapter.clear();
adapter.addAll(application.getToDos());
adapter.notifyDataSetChanged();

With Groovy we can simply rewrite it like this:

listAdapter.clear()
listAdapter.addAll(application.toDos)
listAdapter.notifyDataSetChanged()

Groovy's closures are another feature that comes in very handy when working with Android.
To add a click listener to a button you write something like this in Java:

Button button = (Button) findViewById(;
button.setOnClickListener(new View.OnClickListener() {
  @Override
  void onClick(View v) {
    ...
  }
});

With Groovy it is just:

def button = findViewById(
button.onClickListener = {
  ...
}

See CreateNewTodoActivity for a complete example.

Be aware of runtime errors

Dynamic languages typically tend to increase the number of errors you find at runtime. Depending on the app you are building this can be a serious issue. Larger apps can have a significant deployment time (the app needs to be packaged, copied via USB to the device, installed and started on the device). Groovy's @CompileStatic annotation can be used to tackle this issue: it compiles parts of your Groovy code statically to reduce runtime errors:

@CompileStatic
class ToDoListActivity extends ListActivity {
  ..
}

The downside of @CompileStatic is that you can (obviously) no longer use Groovy's dynamic features.

Quick summary

It is very easy to get Groovy running on Android. Because of Groovy's interoperability with Java it is also very easy to mix Groovy and Java in the same application. The Groovy integration in Android Studio is actually better than I had expected (however, some manual tweaks are still required).

Reference: Creating Android Apps with Groovy 2.4 from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog....

Fixing Elasticsearch Allocation Issues

Last week I was working with some Logstash data on my laptop. There are around 350 indices that contain the Logstash data and an index that holds the metadata for Kibana 4. When trying to start the single-node cluster I have to wait a while until all indices are available. Some APIs can be used to see the progress of the startup process. The cluster health API gives general information about the state of the cluster and indicates if the cluster health is green, yellow or red. After a while the number of unassigned shards didn't change anymore but the cluster still stayed in a red state.

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 1850,
  "active_shards" : 1850,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1852
}

One shard couldn't be recovered: 1850 were ok but it should have been 1851. To see the problem we can use the cat indices command, which shows us all indices and their health.

curl http://localhost:9200/_cat/indices
[...]
yellow open logstash-2014.02.16 5 1 1184 0 1.5mb 1.5mb
red    open .kibana             1 1
yellow open logstash-2014.06.03 5 1 1857 0 2mb 2mb
[...]

The .kibana index didn't turn yellow. It only consists of one primary shard, which couldn't be allocated. Restarting the node and closing and opening the index didn't help. Looking at elasticsearch-kopf I could see that primary and replica shards were both unassigned (you need to tick the checkbox that says "hide special" to see the index). Fortunately there is a way to bring the cluster into a yellow state again: we can manually allocate the primary shard on our node. Elasticsearch provides the Cluster Reroute API that can be used to allocate a shard on a node. When trying to allocate the shard of the index .kibana I first got an exception.
curl -XPOST "http://localhost:9200/_cluster/reroute" -d'
{
  "commands" : [ {
    "allocate" : {
      "index" : ".kibana",
      "shard" : 0,
      "node" : "Jebediah Guthrie"
    }
  } ]
}'

[2015-01-30 13:35:47,848][DEBUG][action.admin.cluster.reroute] [Jebediah Guthrie] failed to perform [cluster_reroute (api)]
org.elasticsearch.ElasticsearchIllegalArgumentException: [allocate] trying to allocate a primary shard [.kibana][0], which is disabled

Fortunately the message already tells us the problem: by default you are not allowed to allocate primary shards, because of the danger of losing data. If you'd like to allocate a primary shard you need to tell Elasticsearch explicitly by setting the property allow_primary:

curl -XPOST "http://localhost:9200/_cluster/reroute" -d'
{
  "commands" : [ {
    "allocate" : {
      "index" : ".kibana",
      "shard" : 0,
      "node" : "Jebediah Guthrie",
      "allow_primary": "true"
    }
  } ]
}'

For me this helped: my shard got reallocated and the cluster health turned yellow. I am not sure what caused the problem, but it is very likely related to the way I am working locally. I regularly send my laptop to sleep, which is something you would never do on a server. Nevertheless I have seen this problem a few times locally, which justifies writing down the necessary steps to fix it.

Reference: Fixing Elasticsearch Allocation Issues from our JCG partner Florian Hopf at the Dev Time blog....
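Since the reroute body is easy to get wrong when typed by hand, a small helper can assemble it. This is only a sketch using string concatenation — in a real project you would rather use a JSON library or an Elasticsearch client — and the helper names are mine, not part of any Elasticsearch API:

```java
public class RerouteCommand {

    // Builds the _cluster/reroute allocate-command body shown above.
    // allowPrimary should only be set when you accept the risk of data loss.
    public static String allocate(String index, int shard, String node, boolean allowPrimary) {
        StringBuilder sb = new StringBuilder();
        sb.append("{ \"commands\" : [ { \"allocate\" : { ")
          .append("\"index\" : \"").append(index).append("\", ")
          .append("\"shard\" : ").append(shard).append(", ")
          .append("\"node\" : \"").append(node).append("\"");
        if (allowPrimary) {
            // Matches the second curl call above, which bypasses the primary-shard safety check.
            sb.append(", \"allow_primary\" : \"true\"");
        }
        sb.append(" } } ] }");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(allocate(".kibana", 0, "Jebediah Guthrie", true));
    }
}
```

The resulting string is what you would POST to http://localhost:9200/_cluster/reroute.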

JavaFX Tip 17: Animated Workbench Layout with AnchorPane

I recently had to implement a layout for an application where the menu area and the status area could be hidden or shown with a slide-in / slide-out animation, based on whether the user was logged in or not. The following video shows the layout in action. In the past I probably would have implemented this kind of behavior with a custom control and custom layout code (as in "override the layoutChildren() method in the skin"). But this time my setup was different because I was using afterburner.fx from Adam Bien, so I had FXML and a controller class. So what to do? I decided to try my luck with an anchor pane and to update the constraints on the stack panes via a timeline instance. Constraints are stored in the observable properties map of the stack panes. Whenever these constraints change, a layout of the anchor pane is requested automatically. If this happens without any flickering, then we end up with a nice smooth animation. By the way, coming from Swing, I always expect flickering, but it normally doesn't happen with JavaFX. I ended up writing the following controller class managing the anchor pane and its children stack panes. Please notice the little trick with the intermediate properties menuPaneLocation and bottomPaneLocation. They are required because the animation timeline works with properties. So it updates these properties, and whenever they change new anchor pane constraints are applied.

import static javafx.scene.layout.AnchorPane.setBottomAnchor;
import static javafx.scene.layout.AnchorPane.setLeftAnchor;

import javafx.animation.KeyFrame;
import javafx.animation.KeyValue;
import javafx.animation.Timeline;
import javafx.beans.property.BooleanProperty;
import javafx.beans.property.DoubleProperty;
import javafx.beans.property.SimpleBooleanProperty;
import javafx.beans.property.SimpleDoubleProperty;
import javafx.fxml.FXML;
import javafx.scene.layout.StackPane;
import javafx.util.Duration;

/**
 * This presenter covers the top-level layout concepts of the workbench.
 */
public class WorkbenchPresenter {

    @FXML
    private StackPane topPane;

    @FXML
    private StackPane menuPane;

    @FXML
    private StackPane centerPane;

    @FXML
    private StackPane bottomPane;

    public WorkbenchPresenter() {
    }

    private final BooleanProperty showMenuPane = new SimpleBooleanProperty(this, "showMenuPane", true);

    public final boolean isShowMenuPane() {
        return showMenuPane.get();
    }

    public final void setShowMenuPane(boolean showMenu) {
        showMenuPane.set(showMenu);
    }

    /**
     * Returns the property used to control the visibility of the menu panel.
     * When the value of this property changes to false then the menu panel
     * will slide out to the left.
     *
     * @return the property used to control the menu panel
     */
    public final BooleanProperty showMenuPaneProperty() {
        return showMenuPane;
    }

    private final BooleanProperty showBottomPane = new SimpleBooleanProperty(this, "showBottomPane", true);

    public final boolean isShowBottomPane() {
        return showBottomPane.get();
    }

    public final void setShowBottomPane(boolean showBottom) {
        showBottomPane.set(showBottom);
    }

    /**
     * Returns the property used to control the visibility of the bottom panel.
     * When the value of this property changes to false then the bottom panel
     * will slide out at the bottom.
     *
     * @return the property used to control the bottom panel
     */
    public final BooleanProperty showBottomPaneProperty() {
        return showBottomPane;
    }

    public final void initialize() {
        menuPaneLocation.addListener(it -> updateMenuPaneAnchors());
        bottomPaneLocation.addListener(it -> updateBottomPaneAnchors());

        showMenuPaneProperty().addListener(it -> animateMenuPane());
        showBottomPaneProperty().addListener(it -> animateBottomPane());

        menuPane.setOnMouseClicked(evt -> setShowMenuPane(false));

        centerPane.setOnMouseClicked(evt -> {
            setShowMenuPane(true);
            setShowBottomPane(true);
        });

        bottomPane.setOnMouseClicked(evt -> setShowBottomPane(false));
    }

    /*
     * The updateMenu/BottomPaneAnchors methods get called whenever the value
     * of menuPaneLocation or bottomPaneLocation changes. Setting anchor pane
     * constraints will automatically trigger a relayout of the anchor pane
     * children.
     */
    private void updateMenuPaneAnchors() {
        setLeftAnchor(menuPane, getMenuPaneLocation());
        setLeftAnchor(centerPane, getMenuPaneLocation() + menuPane.getWidth());
    }

    private void updateBottomPaneAnchors() {
        setBottomAnchor(bottomPane, getBottomPaneLocation());
        setBottomAnchor(centerPane, getBottomPaneLocation() + bottomPane.getHeight());
        setBottomAnchor(menuPane, getBottomPaneLocation() + bottomPane.getHeight());
    }

    /*
     * Starts the animation for the menu pane.
     */
    private void animateMenuPane() {
        if (isShowMenuPane()) {
            slideMenuPane(0);
        } else {
            slideMenuPane(-menuPane.prefWidth(-1));
        }
    }

    /*
     * Starts the animation for the bottom pane.
     */
    private void animateBottomPane() {
        if (isShowBottomPane()) {
            slideBottomPane(0);
        } else {
            slideBottomPane(-bottomPane.prefHeight(-1));
        }
    }

    /*
     * The animations are using the JavaFX timeline concept. The timeline
     * updates properties. In this case we have to introduce our own properties
     * further below (menuPaneLocation, bottomPaneLocation) because ultimately
     * we need to update layout constraints, which are not properties. So this
     * is a little work-around.
     */
    private void slideMenuPane(double toX) {
        KeyValue keyValue = new KeyValue(menuPaneLocation, toX);
        KeyFrame keyFrame = new KeyFrame(Duration.millis(300), keyValue);
        Timeline timeline = new Timeline(keyFrame);
        timeline.play();
    }

    private void slideBottomPane(double toY) {
        KeyValue keyValue = new KeyValue(bottomPaneLocation, toY);
        KeyFrame keyFrame = new KeyFrame(Duration.millis(300), keyValue);
        Timeline timeline = new Timeline(keyFrame);
        timeline.play();
    }

    private DoubleProperty menuPaneLocation = new SimpleDoubleProperty(this, "menuPaneLocation");

    private double getMenuPaneLocation() {
        return menuPaneLocation.get();
    }

    private DoubleProperty bottomPaneLocation = new SimpleDoubleProperty(this, "bottomPaneLocation");

    private double getBottomPaneLocation() {
        return bottomPaneLocation.get();
    }
}

The following is the FXML that was required for this to work:

<?xml version="1.0" encoding="UTF-8"?>

<?import java.lang.*?>
<?import javafx.scene.layout.*?>

<AnchorPane maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" prefHeight="400.0" prefWidth="600.0" xmlns="" xmlns:fx="" fx:controller="com.workbench.WorkbenchPresenter">
  <children>
    <StackPane fx:id="bottomPane" layoutX="-4.0" layoutY="356.0" prefHeight="40.0" AnchorPane.bottomAnchor="0.0" AnchorPane.leftAnchor="0.0" AnchorPane.rightAnchor="0.0" />
    <StackPane fx:id="menuPane" layoutY="28.0" prefWidth="200.0" AnchorPane.bottomAnchor="40.0" AnchorPane.leftAnchor="0.0" AnchorPane.topAnchor="40.0" />
    <StackPane fx:id="topPane" prefHeight="40.0" AnchorPane.leftAnchor="0.0" AnchorPane.rightAnchor="0.0" AnchorPane.topAnchor="0.0" />
    <StackPane fx:id="centerPane" layoutX="72.0" layoutY="44.0" AnchorPane.bottomAnchor="40.0" AnchorPane.leftAnchor="200.0" AnchorPane.rightAnchor="0.0" AnchorPane.topAnchor="40.0" />
  </children>
</AnchorPane>

Reference: JavaFX Tip 17: Animated Workbench Layout with AnchorPane from our JCG partner Dirk Lemmermann at the Pixel Perfect blog....
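Stripped of the JavaFX classes, the timeline trick boils down to interpolating the anchor value over time and re-applying it on every pulse. The interpolation step itself, for the default linear KeyValue interpolator, is plain arithmetic; a small sketch (the class name Slide is mine, and a real Timeline also handles scheduling and easing):

```java
public class Slide {

    // Linear interpolation between from and to at progress t in [0, 1] —
    // the value a JavaFX Timeline would write into menuPaneLocation on each pulse.
    public static double interpolate(double from, double to, double t) {
        if (t < 0) t = 0;       // clamp: before the animation starts
        if (t > 1) t = 1;       // clamp: after the animation ends
        return from + (to - from) * t;
    }

    public static void main(String[] args) {
        // Slide the menu pane anchor from -200 (hidden) to 0 (visible) over 300 ms.
        double from = -200, to = 0, durationMs = 300;
        for (double ms = 0; ms <= durationMs; ms += 100) {
            System.out.printf("t=%3.0fms left anchor=%.1f%n", ms, interpolate(from, to, ms / durationMs));
        }
    }
}
```

Each interpolated value corresponds to one update of menuPaneLocation, which in turn triggers updateMenuPaneAnchors() and a relayout of the anchor pane.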
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.