


Deployment Script vs. Rultor

When I explain how Rultor automates deployment/release processes, very often I hear something like: "But I already have a script that deploys everything automatically." This response is so common that I decided to summarize my three main arguments for automating deployment/release processes with Rultor in one article: isolated Docker containers, visibility of logs and security of credentials. Read about them and see what Rultor gives you on top of your existing deployment script(s).

Before we start with the arguments, let me emphasize that Rultor is a useful interface to your custom scripts. When you decide to automate deployment with Rultor, you don't throw away any of your existing scripts. You just teach Rultor how to call them.

Isolated Docker Containers

The first advantage you get once you start calling your deployment scripts from Rultor is the use of Docker. I'm sure you know what Docker is, but for those who don't — it is a manager of virtual Linux "machines". It's a command-line tool that you call when you need to run some script in a new virtual machine (aka "container"). Docker starts the container almost immediately and runs your script. The beauty of Docker is that every container is a perfectly isolated Linux environment, with its own file system, memory, processes, etc.

When you tell Rultor to run your deployment script, it starts a new Docker container and runs your script there. But what benefit does this give me, you ask? The main benefit is that the container gets destroyed right after your script is done. This means that you can do all pre-configuration inside the container without any fear of conflict with your main working platform.

Let me give an example. I'm developing on a MacBook, where I install and remove packages which I need for development. At the same time, I have a project that, in order to be deployed, requires PHP 5.3, MySQL 5.6, phing, phpunit, phpcs and xdebug. Every MacOS version needs to be configured specifically to get these applications up and running, and it's a time-consuming job. I can change laptops, and I can change MacOS versions, but the project stays the same. It still requires the same set of packages in order to run its deployment script successfully. And the project is not in active development any more. I simply don't need these packages for my day-to-day work, since I'm working with Java more now. But, when I need to make a minor fix to that PHP project and deploy it, I have to install all the required PHP packages and configure them. Only after that can I deploy that minor fix. It is annoying, to say the least.

Docker gives me the ability to automate all of this. My existing deployment script gets a preamble, which installs and configures all necessary PHP-related packages in a clean Ubuntu container. This preamble is executed on every run of my deployment script, inside a Docker container. I'll show what it looks like in a moment. My deployment script looked like this before I started to use Rultor:

#!/bin/bash
phing test
git ftp push --user ".." --passwd ".." --syncroot php/src ftp://ftp.example.com/

Just two lines. The first one is a full run of unit tests. The second one is an FTP deployment to the production server. Very simple. But this script will only work if PHP 5.3, MySQL, phing, xdebug, phpcs and phpunit are installed. Again, it's a lot of work to install and configure them every time I upgrade my MacOS or change a laptop.
Needless to say, if/when someone joins the project and tries to run my scripts, he/she will have to do this pre-installation work again. So, here is the new script, which I'm using now. It is executed inside a new Docker container, every time:

#!/bin/bash
# First, we install all prerequisites
sudo apt-get install -y php5 php5-mysql mysql
sudo apt-get install php-pear
sudo pear channel-discover pear.phpunit.de
sudo pear install phpunit/PHPUnit
sudo pear install PHP_CodeSniffer
sudo pecl install xdebug
sudo pear channel-discover pear.phing.info
sudo pear install phing/phing
# And now the same script I had before
phing test
git ftp push --user ".." --passwd ".." --syncroot php/src ftp://ftp.example.com/

Obviously, running this script on my MacBook (without virtualization) would cause a lot of trouble. Well, I don't even have apt-get here! Thus, the first benefit that Rultor gives you is the isolation of your deployment script in its own virtual environment. We have this mostly thanks to Docker.

Visibility of Logs

Traditionally, we keep deployment scripts in some ~/deploy directory and run them with a magic set of parameters. In a small project, you do this yourself and this directory is on your own laptop. In a bigger project, there is a "deployment" server that has that magic directory with a set of scripts that can be executed only by a few trusted senior developers. I've seen this setup many times. The biggest issue here is traceability. It's almost impossible to find out who deployed what and why some particular deployment failed. The senior deployment gurus simply SSH to the server and run those magic scripts with magic parameters. Logs are usually lost and problem tracking is very difficult or impossible.

Rultor offers something different. With Rultor, there is no SSH access to deployment scripts any more. All scripts stay in the .rultor.yml configuration file, and you start them by posting messages in your issue tracking system (for example Github, JIRA or Trac). Rultor runs the script and publishes its full log right to your ticket. The log stays with your project forever. You can always get back to the ticket you were working with and check why deployment failed and what instructions were actually executed. For example, check out this Github issue, where I was deploying a new version of Rultor itself, and failed a few times: yegor256/rultor#563. All my failed attempts are recorded. I can always get back to them and investigate. For a big project this information is vital. Thus, the second benefit of Rultor versus a standalone deployment script is the visibility of every single operation.

Security of Credentials

When you have a custom script sitting on your laptop or on that secret team deployment server, your production credentials stay close to it. There is just no other way. If your software works with a database, it has to know the login credentials (user name, password, DB name, port number, etc.). Well, in the worst case, some people just hard-code that information right into the source code. We aren't even going to discuss this case, that's how bad it is. But let's say you separate your DB credentials from the source code. You will have something like a db.properties or db.ini file, which will be attached to the application right before deployment. You can also keep that file directly on the production server, which is even better, but not always possible, especially with PaaS deployments, for example. A similar problem exists with deployments of artifacts to repositories.
Say, you’re regularly deploying to RubyGems.org. Your ~/.gem/credentials will contain your secret API key. So, very often, your deployment scripts are accompanied by some files with sensitive and secure information. And these files have this information in a plain, open format. No encryption, no protection. Just user names, passwords, codes and tokens in plain text. Why is this bad? Well, for a single developer with a single laptop this doesn’t sound like a problem. Although, I don’t like the idea of losing a laptop somewhere in an airport with all credentials open and ready to be used. You may argue that there are disc protection tools, like FileVault for MacOS or BestCrypt for Windows. Yes, maybe. But let’s see what happens when we have a team of developers, working together and sharing those deployment scripts and files with credentials. Once you give access to your deployment scripts to a new member of the team, you have to share all that sensitive data. There is just no way around it. In order to use the scripts he/she has to be able to open files with credentials. This is a problem, if you care about the security of your data. Rultor solves this problem by offering an on-the-fly GPG decryption of your sensitive data, right before they are used by your deployment scripts. In the .rultor.yml configuration file you just say: decrypt: db.ini: "repo/db.ini.asc" deploy: script: ftp put db.ini production Then, you encrypt your db.ini using a Rultor GPG key, and fearlessly commit db.ini.asc to the repository. Nobody will be able to open and read that file, except the Rultor server itself, right before running the deployment script. Thus, the third benefit of Rultor versus a standalone deployment script is proper security of sensitive data. Related Posts You may also find these posts interesting:How to Publish to Rubygems, in One Click How to Deploy to CloudBees, in One Click How to Release to Maven Central, in One Click Rultor + Travis Every Build in Its Own Docker ContainerReference: Deployment Script vs. Rultor from our JCG partner Yegor Bugayenko at the About Programming blog....

Akka Notes – Introducing Actors

Anyone who has done multithreading in the past won't deny how hard and painful it is to manage multithreaded applications. I said manage because it starts out simple and becomes a whole lot of fun once you start seeing performance improvements. However, it aches when you see that you don't have an easier way to recover from errors in your sub-tasks, OR those zombie bugs that you find hard to reproduce, OR when your profiler shows that your threads are spending a lot of time blocking wastefully before writing to a shared state. I prefer not to talk about how the Java concurrency API and its collections made things better and easier, because I am sure if you are here, you probably needed more control over the sub-tasks, or simply don't like to write locks and synchronized blocks and would prefer a higher level of abstraction. In this series of Akka Notes, we'll go through simple Akka examples to explore the various features that we have in the toolkit.

What are Actors?

Akka's Actors follow the Actor Model (duh!). Treat Actors like people. People who don't talk to each other in person. They just talk through mails. Let's expand on that a bit.

1. Messaging

Consider two persons – a wise Teacher and a Student. The Student sends a mail every morning to the Teacher and the wise Teacher sends a wise quote back. Points to note:
- The student sends a mail. Once sent, the mail cannot be edited. Talk about natural immutability.
- The Teacher checks his mailbox when he wishes to do so.
- The Teacher also sends a mail back (immutable again).
- The student checks the mailbox at his own time.
- The student doesn't wait for the reply. (no blocking)
That pretty much sums up the basic building block of the Actor Model – passing messages.

2. Concurrency

Now, imagine there are 3 wise teachers and 3 students – every student sends notes to every other teacher. What happens then? Nothing changes actually. Everybody has their own mailbox. One subtle point to note here: by default, mails in the mailbox are read/processed in the order they arrived. Internally, by default it is a ConcurrentLinkedQueue. And since nobody waits for the mail to be picked up, it is simply a non-blocking message. (There are a variety of built-in mailboxes, including bounded and priority based. In fact, we could build one ourselves too.)

3. Failover

Imagine these 3 teachers are from three different departments – History, Geography and Philosophy. History teachers reply with a note on an event in the past, Geography teachers send an interesting place, and Philosophy teachers, a quote. Each student sends a message to each teacher and gets responses. The student doesn't care which teacher in the department sends the reply back. What if one day, a teacher falls sick? There has to be at least one teacher handling the mails from the department. In this case, another teacher in the department steps up and does the job. Points to note:
- There could be a pool of Actors who do different things.
- An Actor could do something that causes an exception. It wouldn't be able to recover by itself. In which case a new Actor could be created in place of the old one. Alternatively, the Actor could just ignore that one particular message and proceed with the rest of the messages. These are called Directives and we'll discuss them later.

4. Multitasking

For a twist, let's assume that each of these teachers also sends the exam score through mail too, if the student asks for it. Similarly, an Actor could handle more than one type of message comfortably.
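To make the multitasking point concrete before we get to the real code, here is a minimal sketch using Akka's classic Java API (UntypedActor); the QuoteRequest and ScoreRequest message types are made up purely for illustration:

import akka.actor.UntypedActor;

// Hypothetical message types, for illustration only
class QuoteRequest {}
class ScoreRequest {}

public class TeacherActor extends UntypedActor {
    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof QuoteRequest) {
            // reply to the student's quote request
            getSender().tell("Anger is like a thorn in the heart", getSelf());
        } else if (message instanceof ScoreRequest) {
            // the same actor comfortably handles a second message type
            getSender().tell(95, getSelf());
        } else {
            unhandled(message);
        }
    }
}

One mailbox, one actor, several message types, and no locks or synchronized blocks in sight.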
5. Chaining

What if the student would like to get only one final consolidated trivia mail instead of three? We could do that with Actors too. We could chain the teachers as a hierarchy. We'll come back to that later when we talk about Supervisors, and revisit the same thought when we talk about Futures.

As requested by Mohan, let's just try to map the analogy components to the components in the Actor Model.
- The Students and the Teachers become our Actors.
- The Email Inbox becomes the Mailbox component.
- The request and the response can't be modified. They are immutable objects.
- Finally, the MessageDispatcher component in Akka manages the mailboxes and routes the messages to the respective Mailbox.
Enough talk, let's cook up some code…

Reference: Akka Notes – Introducing Actors from our JCG partner Arun Manivannan at the Rerun.me blog.

More metrics in Apache Camel 2.14

Apache Camel 2.14 is being released later this month. There is a slight holdup due to an Apache infrastructure issue which is being worked on. This blog post is about one of the new functions we have added to this release. Thanks to Lauri Kimmel, who donated a camel-metrics component, we integrated with the excellent Codahale Metrics library. So I took this component one step further and integrated it with the Camel routes, so we have additional metrics about route performance using Codahale Metrics. This allows end users to seamlessly feed Camel routing information together with existing data they are gathering using Codahale Metrics. Also take note that we have a lot of existing metrics from camel-core, which of course are still around. What Codahale brings to the table is additional statistical data which we do not have in camel-core.

To use the Codahale metrics, all you need to do is:
- add the camel-metrics component
- enable route metrics in XML or Java code

To enable it in XML you declare a bean as shown below:

<bean id="metricsRoutePolicyFactory"
      class="org.apache.camel.component.metrics.routepolicy.MetricsRoutePolicyFactory"/>

And doing so in Java code is easy as well, by calling this method on your CamelContext:

context.addRoutePolicyFactory(new MetricsRoutePolicyFactory());

Now performance metrics are only usable if you have a way of displaying them, and for that you can use hawtio. Notice you can use any kind of monitoring tooling which can integrate with JMX, as the metrics are available over JMX. The actual data is 100% Codahale JSON format, where a piece of the data is shown in the figure below. The next release of hawtio supports Camel 2.14 and automatically detects whether you have enabled route metrics; if so, it shows a sub-tab where the information can be seen in real time in graphical charts.

The screenshot above is from the new camel-example-servlet-rest-tomcat which we ship out of the box. This example demonstrates another new functionality in Camel 2.14, the Rest DSL (I will do a blog about that later). This example enables the route metrics out of the box, so what I did was to deploy this example together with hawtio (the hawtio-default WAR) in Apache Tomcat 8. With hawtio you can also build custom dashboards, so here at the end I have put together a dashboard with various screens from hawtio to have a custom view of a Camel application.

Reference: More metrics in Apache Camel 2.14 from our JCG partner Claus Ibsen at the Claus Ibsen riding the Apache Camel blog.

A classloading mystery solved

Facing a good old problem

I was struggling with a class loading issue on an application server. The libraries were defined as Maven dependencies and therefore packaged into the WAR and EAR files. Some of them were also installed into the application server, unfortunately in different versions. When we started the application we faced the various exceptions that are related to this type of problem. There is a good IBM article about these exceptions if you want to dig deeper. Even though we knew that the error was caused by some doubly defined libraries on the classpath, it took more than two hours to investigate which version we really needed, and which JAR to remove.

Same topic by accident at the JUG the same week

A few days later we attended the "Do you really get Classloaders?" session of the Java Users' Society in Zürich. Simon Maple delivered an extremely good intro about class loaders and went into very deep details from the very start. It was an eye-opening session for many. I also have to note that Simon works for ZeroTurnaround and he evangelizes for JRebel. In such a situation a tutorial session is usually biased towards the actual product that is the bread for the tutor. In this case my opinion is that Simon was an absolute gentleman, ethical, and keeping an appropriate balance.

Creating a tool, to solve a mystery, just to create another one

A week later I had some time for hobby programming, which I had not had for a couple of weeks, and I decided to create a little tool that lists all the classes and JAR files that are on the classpath, so that finding duplicates becomes easier. I tried to rely on the fact that the classloaders are usually instances of URLClassLoader, and thus the method getURLs() can be invoked to get all the directory names and JAR files.
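The heart of that idea, roughly (this is a sketch, not the tool's actual source): walk up the classloader hierarchy and dump the URLs that each URLClassLoader serves classes from.

import java.net.URL;
import java.net.URLClassLoader;

public class ClasspathLister {
    public static void main(String[] args) {
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        // walk up the classloader hierarchy, printing each loader's URLs
        while (loader != null) {
            if (loader instanceof URLClassLoader) {
                for (URL url : ((URLClassLoader) loader).getURLs()) {
                    System.out.println(loader + " -> " + url);
                }
            }
            loader = loader.getParent();
        }
    }
}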
Unit testing in such a situation can be very tricky, since the functionality is strongly tied to class loader behavior. To be pragmatic, I decided to just do some manual testing started from JUnit, as long as the code was experimental. First of all I wanted to see if the concept was worth developing further. I was planning to execute the test and look at the log statements reporting that there were no duplicate classes, and then execute the same run a second time, adding some redundant dependencies to the classpath. I was using JUnit 4.10. The version is important in this case. I executed the unit test from the command line and I saw that there were no duplicate classes, and I was happy. After that I executed the same test from Eclipse and, surprise: I got 21 redundantly defined classes!

12:41:51.670 DEBUG c.j.c.ClassCollector - There are 21 redundantly defined classes.
12:41:51.670 DEBUG c.j.c.ClassCollector - Class org/hamcrest/internal/SelfDescribingValue.class is defined 2 times:
12:41:51.671 DEBUG c.j.c.ClassCollector -   sun.misc.Launcher$AppClassLoader@7ea987ac:file:/Users/verhasp/.m2/repository/junit/junit/4.10/junit-4.10.jar
12:41:51.671 DEBUG c.j.c.ClassCollector -   sun.misc.Launcher$AppClassLoader@7ea987ac:file:/Users/verhasp/.m2/repository/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar
...

Googling a bit, I could easily discover that JUnit 4.10 has an extra dependency, as shown by Maven:

$ mvn dependency:tree
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building clalotils 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ clalotils ---
[INFO] com.verhas:clalotils:jar:1.0.0-SNAPSHOT
[INFO] +- junit:junit:jar:4.10:test
[INFO] |  \- org.hamcrest:hamcrest-core:jar:1.1:test
[INFO] +- org.slf4j:slf4j-api:jar:1.7.7:compile
[INFO] \- ch.qos.logback:logback-classic:jar:1.1.2:compile
[INFO]    \- ch.qos.logback:logback-core:jar:1.1.2:compile
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.642s
[INFO] Finished at: Wed Sep 03 12:44:18 CEST 2014
[INFO] Final Memory: 13M/220M
[INFO] ------------------------------------------------------------------------

This is actually fixed in 4.11, so if I change the dependency to JUnit 4.11 I do not face the issue. OK. Half of the mystery solved. But why does the Maven command-line execution not report the doubly defined classes?

Extending the logging, logging more and more, I could spot this line:

12:46:19.433 DEBUG c.j.c.ClassCollector - Loading from the jar file /Users/verhasp/github/clalotils/target/surefire/surefirebooter235846110768631567.jar

What is in this file? Let's unzip it:

$ ls -l /Users/verhasp/github/clalotils/target/surefire/surefirebooter235846110768631567.jar
ls: /Users/verhasp/github/clalotils/target/surefire/surefirebooter235846110768631567.jar: No such file or directory

The file does not exist! Seemingly Maven creates this JAR file and then deletes it when the execution of the test is finished. Googling again, I found the solution. Java loads the classes from the classpath. The classpath can be defined on the command line, but there are other sources from which the application class loaders fetch files. One such source is the manifest file of a JAR. The manifest file of a JAR can define what other JAR files are needed to execute the classes in that JAR. Maven creates a JAR file that contains nothing but a manifest file, which defines the JARs and directories to list on the classpath. These JARs and directories are NOT returned by the method getURLs(), therefore the (first version of) my little tool did not find the duplicates. For demonstration purposes I was quick enough to make a copy of the file while the mvn test command was running, and got the following output:

$ unzip /Users/verhasp/github/clalotils/target/surefire/surefirebooter5550254534465369201\ copy.jar
Archive:  /Users/verhasp/github/clalotils/target/surefire/surefirebooter5550254534465369201 copy.jar
  inflating: META-INF/MANIFEST.MF
$ cat META-INF/MANIFEST.MF
Manifest-Version: 1.0
Class-Path: file:/Users/verhasp/.m2/repository/org/apache/maven/surefi
 re/surefire-booter/2.8/surefire-booter-2.8.jar file:/Users/verhasp/.m
 2/repository/org/apache/maven/surefire/surefire-api/2.8/surefire-api-
 2.8.jar file:/Users/verhasp/github/clalotils/target/test-classes/ fil
 e:/Users/verhasp/github/clalotils/target/classes/ file:/Users/verhasp
 /.m2/repository/junit/junit/4.10/junit-4.10.jar file:/Users/verhasp/.
 m2/repository/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar fi
 le:/Users/verhasp/.m2/repository/org/slf4j/slf4j-api/1.7.7/slf4j-api-
 1.7.7.jar file:/Users/verhasp/.m2/repository/ch/qos/logback/logback-c
 lassic/1.1.2/logback-classic-1.1.2.jar file:/Users/verhasp/.m2/reposi
 tory/ch/qos/logback/logback-core/1.1.2/logback-core-1.1.2.jar
Main-Class: org.apache.maven.surefire.booter.ForkedBooter

It really is nothing else than the manifest file defining the classpath.
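A second version of the tool therefore also has to follow the Class-Path attribute of each JAR it finds. A minimal sketch of reading that attribute (again, not the tool's actual code) could look like this:

import java.util.jar.JarFile;
import java.util.jar.Manifest;

public class ManifestClassPath {
    public static void main(String[] args) throws Exception {
        try (JarFile jar = new JarFile(args[0])) {
            Manifest manifest = jar.getManifest();
            if (manifest != null) {
                // Class-Path is a space-separated list of URLs/paths
                String classPath = manifest.getMainAttributes().getValue("Class-Path");
                if (classPath != null) {
                    for (String entry : classPath.split("\\s+")) {
                        System.out.println(entry);
                    }
                }
            }
        }
    }
}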
But why does Maven do it? Sonatype people, some of whom I know personally, are clever people. They don't do such a thing for nothing. The reason for creating a temporary JAR file to start the tests is that the length of the command line is limited on some operating systems, and the length of the classpath may exceed it. Even though Java (since Java 6) itself resolves wildcard characters in the classpath, that is not an option for Maven. The JAR files are in different directories in the Maven repo, each having a long name. Wildcard resolution is not recursive, there is a good reason for that, and even if it were, you just would not want your whole local repo on the classpath.

Conclusion
- Do not use JUnit 4.10! Use something older or newer, or be prepared for surprises.
- Understand what a classloader is, how it works, and what it does.
- Use an operating system that has a huge limit for the maximum command-line length. Or just live with the limitation.
- Something else? Your ideas?

Reference: A classloading mystery solved from our JCG partner Peter Verhas at the Java Deep blog.

What SonarQube Is NOT

The age when SonarQube was not very popular passed a long time ago. Nowadays it is considered the de facto tool for… Wait a minute! What the heck is SonarQube? I've been asked several times to help people install and configure SonarQube, but I'm very surprised that most of them have a not very realistic idea of what SonarQube is. Things get worse when articles like this one (from a very well known company) mislead even more those that are interested in code quality and want to play around with SonarQube. So, back to basics, and let me tell you what SonarQube is NOT.

- SonarQube is NOT a build tool. Period. There are several tools out there like Maven, Ant, Gradle etc. that do a perfect job in that field. SonarQube expects that before you analyze a project it has already been compiled and built by your favorite build tool.
- SonarQube is NOT (only) a static code analyzer: it's not a replacement for FindBugs or CPPCheck or any other similar tool. On the contrary, not only does it offer its own static code analysis mechanism that detects coding rule violations, but at the same time it is integrated with external tools like the ones I mentioned. The result is that you can get, homogenized, in a single report, all issues detected by a variety of static and dynamic analysis tools.
- SonarQube is NOT a code coverage tool: clearly NOT. Again, it is integrated with the most popular test coverage tools like JaCoCo, Cobertura, PHPUnit etc., but it doesn't compute code coverage itself. It reads pre-generated unit test report files and displays them in an extremely convenient dashboard.
- SonarQube is NOT a code formatter. It is not allowed to modify your code in any way. However, you can get formatting suggestions by enabling the CheckStyle, CPPCheck or ScalaStyle rules you want to follow.
- SonarQube is NOT a continuous integration system to run your nightly builds: you can integrate it with the most popular CI engines to apply Continuous Inspection, but it is not their replacement.
- SonarQube is NOT just another manual code review tool. Indeed, SonarQube offers a very powerful mechanism that facilitates code reviews, but this is not a standalone feature. It is tied to the issue detection mechanism, so every code review can be easily associated with the exact part of the problematic code and the developer that caused it.

So what is SonarQube? It's a code quality management platform that allows developer teams to manage, track and eventually improve the quality of their source code. It's a web-based application that keeps historical data of a variety of metrics and gives trends of leading and lagging indicators for all seven deadly sins of developers. I really hope that this post will clear things up around SonarQube and help people understand its value from the first day.

Reference: What SonarQube Is NOT from our JCG partner Patroklos Papapetrou at the Only Software matters blog.

RESTful API and a Web Site in the Same URL

Look at the Github RESTful API, for example. To get information about a repository you make a GET request to api.github.com/repos/yegor256/rultor. In response, you get a JSON document with all the details of the yegor256/rultor repository. Try it, the URL doesn't require any authentication. To open the same repository in a nice HTML+CSS page, you use a different URL: github.com/yegor256/rultor. The URL is different, the server side is definitely different, but the nature of the data is exactly the same. The only thing that changes is the representation layer. In the first case, we get JSON; in the second — HTML. How about combining them? How about using the same URL and the same server-side processing mechanism for both of them? How about shifting the whole rendering task to the client side (the browser) and letting the server work solely with the data?

XSLT is the technology that can help us do this. In "XML+XSLT in a Browser" I explained briefly how it works in a browser. In a nutshell, the server returns an XML with some data and a link to the XSL stylesheet. The stylesheet, being executed in a browser, converts the XML to HTML. The XSL language is as powerful as any other rendering engine, like JSP, JSF, Tiles, or what have you. Actually, it is much more powerful. Using this approach we literally remove the entire rendering layer ("View" in the MVC paradigm) from the server and move it to the browser. If we can make this possible, the web server will expose just a RESTful API, and every response page will have an XSL stylesheet attached. What do we gain? We'll discuss that later, at the end of the post. Now, let's see what problems we will face:
- JSON doesn't have a rendering layer. There is no such thing as XSLT for JSON. So, we will have to forget about JSON and stay with XML only. For me, this sounds perfectly all right. Others don't like XML and prefer to work with JSON only. Never understood them :)
- XSLT 2.0 is not supported by all browsers. Even XSLT 1.0 is only supported by some of them. For example, Internet Explorer 8 doesn't support XSLT at all.
- Browsers support only GET and POST HTTP methods, while traditional RESTful APIs also exploit, at least, PUT and DELETE.

The first problem is not really a problem. It's just a matter of taste (and level of education). The last two problems are much more serious. Let's discuss them.

XSL Transformation on the Server

XSLT is not supported by some browsers. How do we solve this? I think that the best approach is to parse the User-Agent HTTP header in every request and make a guess whether this particular version of the browser supports XSLT or not. It's not so difficult to do, since this compatibility information is public. If the browser doesn't support XSLT, we can do the transformation on the server side. We already have the XML with data, generated by the server, and we already have the XSL attached to it. All we need to do is apply the latter to the former and obtain an HTML page. Then, we return the HTML to the browser. Besides that, we can also pay attention to the Accept header. If it is set to application/xml or text/xml, we return XML, no matter what User-Agent is saying. This means, basically, that some API client is talking to us, not a browser. And this client is not interested in HTML, but in pure data in XML format.
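To make the idea concrete, here is a sketch of that content negotiation in Java, using the standard JAXP transformation API. This is not the implementation behind any of the sites mentioned below; the class and method names, and the naive supportsXslt() check, are made up for illustration:

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XslFallback {
    // Returns raw XML for API clients and XSLT-capable browsers,
    // otherwise applies the XSL stylesheet on the server side.
    public String render(String xml, String xsl, String userAgent, String accept)
            throws Exception {
        if ("application/xml".equals(accept) || "text/xml".equals(accept)
            || supportsXslt(userAgent)) {
            return xml; // the browser will apply the stylesheet itself
        }
        Transformer transformer = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new StringReader(xsl)));
        StringWriter html = new StringWriter();
        transformer.transform(
            new StreamSource(new StringReader(xml)), new StreamResult(html));
        return html.toString();
    }

    private boolean supportsXslt(String userAgent) {
        // Illustration only: a real check needs a proper compatibility table
        return userAgent != null && !userAgent.contains("MSIE 8");
    }
}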
POST Instead of PUT

There is no workaround for this. Browsers don't know anything about PUT or DELETE. So, we should also forget them in our RESTful APIs. We should design our API using only two methods: GET and POST. Is this even possible? Yes. Why not? It won't look as fancy as with all six methods (some APIs also use OPTIONS and HEAD), but it will work.

What Do We Gain?

OK, here is the question — why do we need this? What's wrong with the way most people work now? Why can't we make a web site separate from the API? What benefits do we get if we combine them? I've been combining them in all web applications I've worked with since 2011. And the biggest advantage I'm experiencing is avoiding code duplication. It is obvious that on the server we don't duplicate controllers (in the case of MVC). We have one layer of controllers, and they control both the API and the web site (since they are one thing now). Avoiding code duplication is a very important achievement. Moreover, I believe that it is the most important target for any software project. These small web apps work exactly as explained above: s3auth.com, stateful.co, bibrarian.com. They are all open source, and you can see their source code in Github.

Related Posts

You may also find these posts interesting:
- XML+XSLT in a Browser
- Avoid String Concatenation
- Why NULL is Bad?
- OOP Alternative to Utility Classes

Reference: RESTful API and a Web Site in the Same URL from our JCG partner Yegor Bugayenko at the About Programming blog.

How to Release to Maven Central, in One Click

When I release a new version of jcabi-aspects, a Java open source library, to Maven Central, it takes 30 seconds of my time. Maybe even less. Recently, I released version 0.17.2. You can see how it all happened, in Github issue #80. As you see, I gave a command to Rultor, and it released a new version to Maven Central. I didn't do anything else. Now let's see how you can do the same — how you can configure your project so that the release of its new version to Maven Central takes just a few seconds of your time. By the way, I assume that you're hosting your project in Github. If not, this entire tutorial won't work. If you are still not in Github, I would strongly recommend moving there.

Prepare Your POM

Make sure your pom.xml contains all elements required by Sonatype, explained in Central Sync Requirements. We will deploy to Sonatype, and they will synchronize all JAR (and not only) artifacts to Maven Central.

Register a Project With Sonatype

Create an account in Sonatype JIRA and raise a ticket, asking to approve your groupId. The OSSRH Guide explains this step in more detail.

Create and Distribute a GPG Key

Create a GPG key and distribute it, as explained in the "Working with PGP Signatures" article. When this step is done, you should have two files: pubring.gpg and secring.gpg.

Create settings.xml

Create settings.xml, next to the two .gpg files created in the previous step:

<settings>
  <profiles>
    <profile>
      <id>foo</id> <!-- give it the name of your project -->
      <properties>
        <gpg.homedir>/home/r</gpg.homedir>
        <gpg.keyname>9A105525</gpg.keyname>
        <gpg.passphrase>my-secret</gpg.passphrase>
      </properties>
    </profile>
  </profiles>
  <servers>
    <server>
      <id>sonatype</id>
      <username><!-- Sonatype JIRA user name --></username>
      <password><!-- Sonatype JIRA pwd --></password>
    </server>
  </servers>
</settings>

In this example, 9A105525 is the ID of your public key, and my-secret is the passphrase you used while generating the keys.

Encrypt Security Assets

Now, encrypt these three files with the Rultor public key (9AF0FA4C):

gpg --keyserver hkp://pool.sks-keyservers.net --recv-keys 9AF0FA4C
gpg --trust-model always -a -e -r 9AF0FA4C pubring.gpg
gpg --trust-model always -a -e -r 9AF0FA4C secring.gpg
gpg --trust-model always -a -e -r 9AF0FA4C settings.xml

You will get three new files: pubring.gpg.asc, secring.gpg.asc and settings.xml.asc. Add them to the root directory of your project, commit and push. The files contain your secret information, but only the Rultor server can decrypt them.

Add Sonatype Repositories

I would recommend using jcabi-parent as a parent pom for your project. This will make many further steps unnecessary. If you're using jcabi-parent, skip this step. However, if you don't use jcabi-parent, you should add these two repositories to your pom.xml:

<project>
  [...]
  <distributionManagement>
    <repository>
      <id>oss.sonatype.org</id>
      <url>https://oss.sonatype.org/service/local/staging/deploy/maven2/</url>
    </repository>
    <snapshotRepository>
      <id>oss.sonatype.org</id>
      <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    </snapshotRepository>
  </distributionManagement>
</project>

Configure GPG Plugin

Again, I'd recommend using http://parent.jcabi.com, which configures this plugin automatically. If you're using it, skip this step. Otherwise, add this plugin to your pom.xml:
<project>
  [..]
  <build>
    [..]
    <plugins>
      [..]
      <plugin>
        <artifactId>maven-gpg-plugin</artifactId>
        <version>1.5</version>
        <executions>
          <execution>
            <id>sign-artifacts</id>
            <phase>verify</phase>
            <goals>
              <goal>sign</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

Configure Versions Plugin

Once again, I recommend using http://parent.jcabi.com. It configures all required plugins out of the box. If you're using it, skip this step. Otherwise, add this plugin to your pom.xml:

<project>
  [..]
  <build>
    [..]
    <plugins>
      [..]
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>versions-maven-plugin</artifactId>
        <version>2.1</version>
        <configuration>
          <generateBackupPoms>false</generateBackupPoms>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

Configure Sonatype Plugin

Yes, you're right, http://parent.jcabi.com will help you here as well. If you're using it, skip this step too. Otherwise, add these four plugins to your pom.xml:

<project>
  [..]
  <build>
    [..]
    <plugins>
      [..]
      <plugin>
        <artifactId>maven-deploy-plugin</artifactId>
        <configuration>
          <skip>true</skip>
        </configuration>
      </plugin>
      <plugin>
        <artifactId>maven-source-plugin</artifactId>
        <executions>
          <execution>
            <id>package-sources</id>
            <goals>
              <goal>jar</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <artifactId>maven-javadoc-plugin</artifactId>
        <executions>
          <execution>
            <id>package-javadoc</id>
            <phase>package</phase>
            <goals>
              <goal>jar</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.sonatype.plugins</groupId>
        <artifactId>nexus-staging-maven-plugin</artifactId>
        <version>1.6</version>
        <extensions>true</extensions>
        <configuration>
          <serverId>oss.sonatype.org</serverId>
          <nexusUrl>https://oss.sonatype.org/</nexusUrl>
          <description>${project.version}</description>
        </configuration>
        <executions>
          <execution>
            <id>deploy-to-sonatype</id>
            <phase>deploy</phase>
            <goals>
              <goal>deploy</goal>
              <goal>release</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

Create Rultor Config

Create a .rultor.yml file in the root directory of your project (the reference page explains this format in detail):

decrypt:
  settings.xml: "repo/settings.xml.asc"
  pubring.gpg: "repo/pubring.gpg.asc"
  secring.gpg: "repo/secring.gpg.asc"
release:
  script: |
    mvn versions:set "-DnewVersion=${tag}"
    git commit -am "${tag}"
    mvn clean deploy --settings /home/r/settings.xml

You can compare your file with the live Rultor configuration of jcabi-aspects.

Run It!

Now it's time to see how it all works. Create a new ticket in the Github issue tracker, and post something like this into it (read more about Rultor commands):

@rultor release, tag is `0.1`

You will get a response in a few seconds. The rest will be done by Rultor. Enjoy! BTW, if something doesn't work as I've explained, don't hesitate to submit a ticket to the Rultor issue tracker. I will try to help you. Yeah, I forgot to mention: Rultor also does two important things. First, it creates a Github release with a proper description. Second, it posts a tweet about the release, which you can retweet, to make an announcement to your followers. Both features are very convenient for me. For example:

DynamoDB Local Maven Plugin, 0.7.1 released https://t.co/C3KULouuKS — rultor.com (@rultors) August 19, 2014

Related Posts

You may also find these posts interesting:
- How to Deploy to CloudBees, in One Click
- Deployment Script vs. Rultor
- How to Publish to Rubygems, in One Click
- Rultor + Travis
- Every Build in Its Own Docker Container

Reference: How to Release to Maven Central, in One Click from our JCG partner Yegor Bugayenko at the About Programming blog.

Date/Time Formatting/Parsing, Java 8 Style

Since nearly the beginning of Java, Java developers have worked with dates and times via the java.util.Date class (since JDK 1.0) and then the java.util.Calendar class (since JDK 1.1). During this time, hundreds of thousands (or maybe millions) of Java developers have formatted and parsed Java dates and times using java.text.DateFormat and java.text.SimpleDateFormat. Given how frequently this has been done over the years, it's no surprise that there are numerous online examples of, and tutorials on, parsing and formatting dates and times with these classes. The classic Java Tutorials cover these java.util and java.text classes in the Formatting lesson (Dates and Times). The new Date Time trail in the Java Tutorials covers Java 8's new classes for dates and times and their formatting and parsing. This post provides examples of these in action.

Before demonstrating Java 8 style date/time parsing/formatting with examples, it is illustrative to compare the Javadoc descriptions for DateFormat/SimpleDateFormat and DateTimeFormatter. The table that follows contains differentiating information that can be gleaned directly or indirectly from a comparison of the Javadoc for each formatting class. Perhaps the most important observations to make from this table are that the new DateTimeFormatter is threadsafe and immutable, and the general overview of the APIs that DateTimeFormatter provides for parsing and formatting dates and times.

| Characteristic | DateFormat/SimpleDateFormat | DateTimeFormatter |
|---|---|---|
| Purpose | "formats and parses dates or time in a language-independent manner" | "Formatter for printing and parsing date-time objects." |
| Primarily Used With | java.util.Date, java.util.Calendar | java.time.LocalDate, java.time.LocalTime, java.time.LocalDateTime, java.time.OffsetTime, java.time.OffsetDateTime, java.time.ZonedDateTime, java.time.Instant |
| Thread Safety | "Date formats are not synchronized." | "This class is immutable and thread-safe." |
| Direct Formatting | format(Date) | format(TemporalAccessor) |
| Direct Parsing | parse(String) | parse(CharSequence, TemporalQuery) |
| Indirect Formatting | None [unless you use Groovy's Date.format(String) extension] | LocalDate.format(DateTimeFormatter), LocalTime.format(DateTimeFormatter), LocalDateTime.format(DateTimeFormatter), OffsetTime.format(DateTimeFormatter), OffsetDateTime.format(DateTimeFormatter), ZonedDateTime.format(DateTimeFormatter) |
| Indirect Parsing | None [unless you use the deprecated Date.parse(String) or Groovy's Date.parse(String, String) extension] | LocalDate.parse(CharSequence, DateTimeFormatter), LocalTime.parse(CharSequence, DateTimeFormatter), LocalDateTime.parse(CharSequence, DateTimeFormatter), OffsetTime.parse(CharSequence, DateTimeFormatter), OffsetDateTime.parse(CharSequence, DateTimeFormatter), ZonedDateTime.parse(CharSequence, DateTimeFormatter) |
| Internationalization | java.util.Locale | java.util.Locale |
| Time Zone | java.util.TimeZone | java.time.ZoneId, java.time.ZoneOffset |
| Predefined Formatters | None, but offers static convenience methods for common instances: getDateInstance(), getDateInstance(int), getDateInstance(int, Locale), getDateTimeInstance(), getDateTimeInstance(int, int), getDateTimeInstance(int, int, Locale), getInstance(), getTimeInstance(), getTimeInstance(int), getTimeInstance(int, Locale) | ISO_LOCAL_DATE, ISO_LOCAL_TIME, ISO_LOCAL_DATE_TIME, ISO_OFFSET_DATE, ISO_OFFSET_TIME, ISO_OFFSET_DATE_TIME, ISO_ZONED_DATE_TIME, BASIC_ISO_DATE, ISO_DATE, ISO_DATE_TIME, ISO_ORDINAL_DATE, ISO_INSTANT, ISO_WEEK_DATE, RFC_1123_DATE_TIME |
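As a quick taste of the predefined formatters listed in the right-hand column, here is a small sketch (not from the original post) applying two of them:

import java.time.LocalDateTime;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class PredefinedFormattersDemo {
    public static void main(String[] args) {
        // e.g. 2014-09-03T13:59:50
        System.out.println(
            LocalDateTime.now().format(DateTimeFormatter.ISO_LOCAL_DATE_TIME));
        // e.g. Wed, 3 Sep 2014 13:59:50 -0600
        System.out.println(
            ZonedDateTime.now().format(DateTimeFormatter.RFC_1123_DATE_TIME));
    }
}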
The remainder of this post uses examples to demonstrate formatting and parsing dates in Java 8 with the java.time constructs. The examples will use the following string patterns and instances.

/** Pattern to use for String representation of Dates/Times. */
private final String dateTimeFormatPattern = "yyyy/MM/dd HH:mm:ss z";

/**
 * java.util.Date instance representing now that can
 * be formatted using SimpleDateFormat based on my
 * dateTimeFormatPattern field.
 */
private final Date now = new Date();

/**
 * java.time.ZonedDateTime instance representing now that can
 * be formatted using DateTimeFormatter based on my
 * dateTimeFormatPattern field.
 *
 * Note that ZonedDateTime needed to be used in this example
 * instead of java.time.LocalDateTime or java.time.OffsetDateTime
 * because there is zone information in the format provided by
 * my dateTimeFormatPattern field and attempting to have
 * DateTimeFormatter.format(TemporalAccessor) instantiated
 * with a format pattern that includes time zone details
 * will lead to DateTimeException for instances of
 * TemporalAccessor that do not have time zone information
 * (such as LocalDateTime and OffsetDateTime).
 */
private final ZonedDateTime now8 = ZonedDateTime.now();

/**
 * String that can be used by both SimpleDateFormat and
 * DateTimeFormatter to parse respective date/time instances
 * from this String.
 */
private final String dateTimeString = "2014/09/03 13:59:50 MDT";

Before Java 8, the standard Java approach for dates and times was via the Date and Calendar classes, and the standard approach to parsing and formatting dates was via DateFormat and SimpleDateFormat. The next code listing demonstrates these classical approaches.

Formatting and Parsing Java Dates with SimpleDateFormat

/**
 * Demonstrate presenting java.util.Date as String matching
 * provided pattern via use of SimpleDateFormat.
 */
public void demonstrateSimpleDateFormatFormatting() {
   final DateFormat format = new SimpleDateFormat(dateTimeFormatPattern);
   final String nowString = format.format(now);
   out.println(
      "Date '" + now + "' formatted with SimpleDateFormat and '"
      + dateTimeFormatPattern + "': " + nowString);
}

/**
 * Demonstrate parsing a java.util.Date from a String
 * via SimpleDateFormat.
 */
public void demonstrateSimpleDateFormatParsing() {
   final DateFormat format = new SimpleDateFormat(dateTimeFormatPattern);
   try {
      final Date parsedDate = format.parse(dateTimeString);
      out.println("'" + dateTimeString + "' is parsed with SimpleDateFormat as " + parsedDate);
   }
   // DateFormat.parse(String) throws a checked exception
   catch (ParseException parseException) {
      out.println(
         "ERROR: Unable to parse date/time String '"
         + dateTimeString + "' with pattern '"
         + dateTimeFormatPattern + "'.");
   }
}

With Java 8, the preferred date/time classes are no longer in the java.util package; the preferred date/time handling classes are now in the java.time package. Similarly, the preferred date/time formatting/parsing classes are no longer in the java.text package, but instead come from the java.time.format package. The java.time package offers numerous classes for modeling dates and/or times. These include classes that model dates only (no time information), classes that model times only (no date information), classes that model date and time information, classes that use time zone information, and classes that do not incorporate time zone information. The approach for formatting and parsing these is generally similar, though the characteristics of the class (whether it supports date or time or time zone information, for example) affect which patterns can be applied.
In this post, I use the ZonedDateTime class for my examples. The reason for this choice is that it includes date, time, and time zone information, and so allows me to use a matching pattern that involves all three of those characteristics, like a Date or Calendar instance does. This makes it easier to compare the old and new approaches. The DateTimeFormatter class provides ofPattern methods that return an instance of DateTimeFormatter based on the provided date/time pattern String. One of the format methods can then be called on that instance of DateTimeFormatter to get the date and/or time information formatted as a String matching the provided pattern. The next code listing illustrates this approach to formatting a String from a ZonedDateTime based on the provided pattern.

Formatting ZonedDateTime as String

/**
 * Demonstrate presenting ZonedDateTime as a String matching
 * provided pattern via DateTimeFormatter and its
 * ofPattern(String) method.
 */
public void demonstrateDateTimeFormatFormatting() {
   final DateTimeFormatter formatter = DateTimeFormatter.ofPattern(dateTimeFormatPattern);
   final String nowString = formatter.format(now8);
   out.println(
      now8 + " formatted with DateTimeFormatter and '"
      + dateTimeFormatPattern + "': " + nowString);
}

Parsing a date/time class from a String based on a pattern is easily accomplished. There are a couple of ways this can be done. One approach is to pass the instance of DateTimeFormatter to the static ZonedDateTime.parse(CharSequence, DateTimeFormatter) method, which returns an instance of ZonedDateTime derived from the provided character sequence and based on the provided pattern. This is illustrated in the next code listing.

Parsing ZonedDateTime from String Using Static ZonedDateTime.parse Method

/**
 * Demonstrate parsing ZonedDateTime from provided String
 * via ZonedDateTime's parse(String, DateTimeFormatter) method.
 */
public void demonstrateDateTimeFormatParsingTemporalStaticMethod() {
   final DateTimeFormatter formatter = DateTimeFormatter.ofPattern(dateTimeFormatPattern);
   final ZonedDateTime zonedDateTime = ZonedDateTime.parse(dateTimeString, formatter);
   out.println(
      "'" + dateTimeString
      + "' is parsed with DateTimeFormatter and ZonedDateTime.parse as "
      + zonedDateTime);
}

A second approach to parsing ZonedDateTime from a String is via DateTimeFormatter's parse(CharSequence, TemporalQuery<T>) method. This is illustrated in the next code listing, which also provides an opportunity to demonstrate the use of a Java 8 method reference (see ZonedDateTime::from).

Parsing ZonedDateTime from String Using DateTimeFormatter.parse Method

/**
 * Demonstrate parsing ZonedDateTime from String
 * via DateTimeFormatter.parse(String, TemporalQuery)
 * with the TemporalQuery in this case being ZonedDateTime's
 * from(TemporalAccessor) used as a Java 8 method reference.
 */
public void demonstrateDateTimeFormatParsingMethodReference() {
   final DateTimeFormatter formatter = DateTimeFormatter.ofPattern(dateTimeFormatPattern);
   final ZonedDateTime zonedDateTime = formatter.parse(dateTimeString, ZonedDateTime::from);
   out.println(
      "'" + dateTimeString
      + "' is parsed with DateTimeFormatter and ZonedDateTime::from as "
      + zonedDateTime);
}

Very few projects have the luxury of being a greenfield project that can start with Java 8. Therefore, it's helpful that there are classes that connect the pre-JDK 8 date/time classes with the new date/time classes introduced in JDK 8.
One example of this is the ability of JDK 8's DateTimeFormatter to provide a descendant of the pre-JDK 8 abstract Format class via the DateTimeFormatter.toFormat() method. This is demonstrated in the next code listing.

Accessing Pre-JDK 8 Format from JDK 8's DateTimeFormatter

/**
 * Demonstrate formatting ZonedDateTime via DateTimeFormatter,
 * but using an implementation of Format.
 */
public void demonstrateDateTimeFormatAndFormatFormatting() {
   final DateTimeFormatter formatter = DateTimeFormatter.ofPattern(dateTimeFormatPattern);
   final Format format = formatter.toFormat();
   final String nowString = format.format(now8);
   out.println(
      "ZonedDateTime " + now + " formatted with DateTimeFormatter/Format (and "
      + format.getClass().getCanonicalName() + ") and '"
      + dateTimeFormatPattern + "': " + nowString);
}

The Instant class is especially important when working with both the pre-JDK 8 Date and Calendar classes and the new date and time classes introduced with JDK 8. The reason Instant is so important is that java.util.Date has the methods from(Instant) and toInstant() for getting a Date from an Instant and an Instant from a Date, respectively. Because Instant is so important in migrating pre-Java 8 date/time handling to Java 8 baselines, the next code listing demonstrates formatting and parsing instances of Instant.

Formatting and Parsing Instances of Instant

/**
 * Demonstrate formatting and parsing an instance of Instant.
 */
public void demonstrateDateTimeFormatFormattingAndParsingInstant() {
   // Instant instances don't have timezone information
   final Instant instant = now.toInstant();
   final DateTimeFormatter formatter = DateTimeFormatter.ofPattern(
      dateTimeFormatPattern).withZone(ZoneId.systemDefault());
   final String formattedInstance = formatter.format(instant);
   out.println(
      "Instant " + instant + " formatted with DateTimeFormatter and '"
      + dateTimeFormatPattern + "' to '" + formattedInstance + "'");
   final Instant instant2 = formatter.parse(formattedInstance, ZonedDateTime::from).toInstant();
   out.println(formattedInstance + " parsed back to " + instant2);
}

All of the above examples come from the sample class shown in the next code listing, included for completeness.

DateFormatDemo.java

package dustin.examples.numberformatdemo;

import static java.lang.System.out;

import java.text.DateFormat;
import java.text.Format;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Date;

/**
 * Demonstrates formatting dates as strings and parsing strings
 * into dates and times using pre-Java 8 (java.text.SimpleDateFormat)
 * and Java 8 (java.time.format.DateTimeFormatter) mechanisms.
 */
public class DateFormatDemo {
   /** Pattern to use for String representation of Dates/Times. */
   private final String dateTimeFormatPattern = "yyyy/MM/dd HH:mm:ss z";

   /**
    * java.util.Date instance representing now that can
    * be formatted using SimpleDateFormat based on my
    * dateTimeFormatPattern field.
    */
   private final Date now = new Date();

   /**
    * java.time.ZonedDateTime instance representing now that can
    * be formatted using DateTimeFormatter based on my
    * dateTimeFormatPattern field.
    *
    * Note that ZonedDateTime needed to be used in this example
    * instead of java.time.LocalDateTime or java.time.OffsetDateTime
    * because there is zone information in the format provided by
    * my dateTimeFormatPattern field and attempting to have
    * DateTimeFormatter.format(TemporalAccessor) instantiated
    * with a format pattern that includes time zone details
    * will lead to DateTimeException for instances of
    * TemporalAccessor that do not have time zone information
    * (such as LocalDateTime and OffsetDateTime).
    */
   private final ZonedDateTime now8 = ZonedDateTime.now();

   /**
    * String that can be used by both SimpleDateFormat and
    * DateTimeFormatter to parse respective date/time instances
    * from this String.
    */
   private final String dateTimeString = "2014/09/03 13:59:50 MDT";

   /**
    * Demonstrate presenting java.util.Date as String matching
    * provided pattern via use of SimpleDateFormat.
    */
   public void demonstrateSimpleDateFormatFormatting() {
      final DateFormat format = new SimpleDateFormat(dateTimeFormatPattern);
      final String nowString = format.format(now);
      out.println(
         "Date '" + now + "' formatted with SimpleDateFormat and '"
         + dateTimeFormatPattern + "': " + nowString);
   }

   /**
    * Demonstrate parsing a java.util.Date from a String
    * via SimpleDateFormat.
    */
   public void demonstrateSimpleDateFormatParsing() {
      final DateFormat format = new SimpleDateFormat(dateTimeFormatPattern);
      try {
         final Date parsedDate = format.parse(dateTimeString);
         out.println("'" + dateTimeString + "' is parsed with SimpleDateFormat as " + parsedDate);
      }
      // DateFormat.parse(String) throws a checked exception
      catch (ParseException parseException) {
         out.println(
            "ERROR: Unable to parse date/time String '"
            + dateTimeString + "' with pattern '"
            + dateTimeFormatPattern + "'.");
      }
   }

   /**
    * Demonstrate presenting ZonedDateTime as a String matching
    * provided pattern via DateTimeFormatter and its
    * ofPattern(String) method.
    */
   public void demonstrateDateTimeFormatFormatting() {
      final DateTimeFormatter formatter = DateTimeFormatter.ofPattern(dateTimeFormatPattern);
      final String nowString = formatter.format(now8);
      out.println(
         now8 + " formatted with DateTimeFormatter and '"
         + dateTimeFormatPattern + "': " + nowString);
   }

   /**
    * Demonstrate parsing ZonedDateTime from provided String
    * via ZonedDateTime's parse(String, DateTimeFormatter) method.
    */
   public void demonstrateDateTimeFormatParsingTemporalStaticMethod() {
      final DateTimeFormatter formatter = DateTimeFormatter.ofPattern(dateTimeFormatPattern);
      final ZonedDateTime zonedDateTime = ZonedDateTime.parse(dateTimeString, formatter);
      out.println(
         "'" + dateTimeString
         + "' is parsed with DateTimeFormatter and ZonedDateTime.parse as "
         + zonedDateTime);
   }

   /**
    * Demonstrate parsing ZonedDateTime from String
    * via DateTimeFormatter.parse(String, TemporalQuery)
    * with the TemporalQuery in this case being ZonedDateTime's
    * from(TemporalAccessor) used as a Java 8 method reference.
    */
   public void demonstrateDateTimeFormatParsingMethodReference() {
      final DateTimeFormatter formatter = DateTimeFormatter.ofPattern(dateTimeFormatPattern);
      final ZonedDateTime zonedDateTime = formatter.parse(dateTimeString, ZonedDateTime::from);
      out.println(
         "'" + dateTimeString
         + "' is parsed with DateTimeFormatter and ZonedDateTime::from as "
         + zonedDateTime);
   }

   /**
    * Demonstrate formatting ZonedDateTime via DateTimeFormatter,
    * but using an implementation of Format.
    */
   public void demonstrateDateTimeFormatAndFormatFormatting() {
      final DateTimeFormatter formatter = DateTimeFormatter.ofPattern(dateTimeFormatPattern);
      final Format format = formatter.toFormat();
      final String nowString = format.format(now8);
      out.println(
         "ZonedDateTime " + now + " formatted with DateTimeFormatter/Format (and "
         + format.getClass().getCanonicalName() + ") and '"
         + dateTimeFormatPattern + "': " + nowString);
   }

   /**
    * Demonstrate formatting and parsing an instance of Instant.
    */
   public void demonstrateDateTimeFormatFormattingAndParsingInstant() {
      // Instant instances don't have timezone information
      final Instant instant = now.toInstant();
      final DateTimeFormatter formatter = DateTimeFormatter.ofPattern(
         dateTimeFormatPattern).withZone(ZoneId.systemDefault());
      final String formattedInstance = formatter.format(instant);
      out.println(
         "Instant " + instant + " formatted with DateTimeFormatter and '"
         + dateTimeFormatPattern + "' to '" + formattedInstance + "'");
      final Instant instant2 = formatter.parse(formattedInstance, ZonedDateTime::from).toInstant();
      out.println(formattedInstance + " parsed back to " + instant2);
   }

   /**
    * Demonstrate java.text.SimpleDateFormat and
    * java.time.format.DateTimeFormatter.
    *
    * @param arguments Command-line arguments; none anticipated.
    */
   public static void main(final String[] arguments) {
      final DateFormatDemo demo = new DateFormatDemo();
      out.print("\n1: ");
      demo.demonstrateSimpleDateFormatFormatting();
      out.print("\n2: ");
      demo.demonstrateSimpleDateFormatParsing();
      out.print("\n3: ");
      demo.demonstrateDateTimeFormatFormatting();
      out.print("\n4: ");
      demo.demonstrateDateTimeFormatParsingTemporalStaticMethod();
      out.print("\n5: ");
      demo.demonstrateDateTimeFormatParsingMethodReference();
      out.print("\n6: ");
      demo.demonstrateDateTimeFormatAndFormatFormatting();
      out.print("\n7: ");
      demo.demonstrateDateTimeFormatFormattingAndParsingInstant();
   }
}

The output from running the above demonstration is shown in the next screen snapshot.

Conclusion

The JDK 8 date/time classes and related formatting and parsing classes are much more straightforward to use than their pre-JDK 8 counterparts. This post has attempted to demonstrate how to apply these new classes and to take advantage of some of their benefits.

Reference: Date/Time Formatting/Parsing, Java 8 Style from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

How JSF Works and how to Debug it – is polyglot an alternative?

JSF is not what we often think it is. It's also a framework that can be somewhat tricky to debug, especially when first encountered. In this post, let's go over why that is and provide some JSF debugging techniques. We will go through the following topics:

- JSF is not what we often think
- The difficulties of JSF debugging
- How to debug JSF systematically
- How JSF Works – The JSF lifecycle
- Debugging an Ajax request from browser to server and back
- Debugging the JSF frontend Javascript code
- Final thoughts – alternatives? (questions to the reader)

JSF is not what we often think

JSF looks at first glance like an enterprise Java/XML frontend framework, but under the hood it really isn't. It's really a polyglot Java/Javascript framework, where the client-side Javascript part is non-negligible and also important to understand. It also has good support for direct HTML/CSS use. JSF developers are often already polyglot developers, whose primary language is Java but who still occasionally need to use Javascript.

The difficulties of JSF debugging

When comparing JSF to GWT and AngularJS in a previous post, I found that the (most often used) approach the framework takes of abstracting HTML and CSS away from the developer behind XML adds to the difficulty of debugging, because it creates an extra level of indirection. A more direct approach of using HTML/CSS directly is also possible, but enterprise Java developers seem to stick to XML in most cases, because it's a more familiar technology. Another problem is that the client-side Javascript part of the framework/libraries is not very well documented, and it's often important to understand what is going on there.

The only way to debug JSF systematically

When first encountering JSF, I tried to approach it from a Java, XML and documentation angle only. While I could do part of the work that way, there were frequent situations where that approach was really not sufficient. The conclusion I came to is that in order to be able to debug JSF applications effectively, an understanding of the following is needed:

- HTML
- CSS
- Javascript
- HTTP
- Chrome Dev Tools, Firebug or equivalent
- The JSF lifecycle

This might sound surprising to developers who work mostly in Java/XML, but this web-centric approach to debugging JSF is the only way I managed to tackle many requirements that needed significant component customization, or to be able to fix certain bugs. Let's start by understanding the inner workings of JSF, so that we can debug it better.

The JSF take on MVC

The way JSF approaches MVC is that all three components reside on the server side:

- The Model is a tree of plain Java objects
- The View is a server-side template defined in XML that is read to build an in-memory view definition
- The Controller is a Java servlet that receives each request and processes it through a series of steps

The browser is assumed to be simply a rendering engine for the HTML generated on the server side. Ajax is achieved by submitting parts of the page for server processing and requesting the server to 'repaint' only portions of the screen, without navigating away from the page.

The JSF Lifecycle

Once an HTTP request reaches the backend, it gets caught by the JSF controller, which will then process it. The request goes through a series of phases known as the JSF lifecycle, which is essential to understanding how JSF works. One simple way to watch the phases go by is to register a PhaseListener that logs each of them, as in the sketch below.
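Here is a minimal sketch of such a listener. The class name, package and log messages are mine, but PhaseListener, PhaseEvent and PhaseId are the standard javax.faces.event types, and the registration mechanism shown in the comment is the standard faces-config.xml one:

import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;
import javax.faces.event.PhaseListener;

// Logs entry and exit of every JSF lifecycle phase. Register it in
// faces-config.xml:
//   <lifecycle>
//     <phase-listener>demo.LoggingPhaseListener</phase-listener>
//   </lifecycle>
public class LoggingPhaseListener implements PhaseListener {

    @Override
    public void beforePhase(final PhaseEvent event) {
        System.out.println("Entering " + event.getPhaseId());
    }

    @Override
    public void afterPhase(final PhaseEvent event) {
        System.out.println("Leaving " + event.getPhaseId());
    }

    @Override
    public PhaseId getPhaseId() {
        // ANY_PHASE subscribes this listener to all six phases
        return PhaseId.ANY_PHASE;
    }
}

With this in place, every request prints the sequence of phases it goes through, which makes the lifecycle described below very concrete.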
Design Goals of the JSF Lifecycle

The whole point of the lifecycle is to manage MVC 100% on the server side, using the browser as a rendering platform only. The initial idea was to decouple the rendering platform from the server-side UI component model, in order to allow HTML to be replaced with alternative markup languages by swapping the Render Response phase. This was in the early 2000s, when it seemed HTML might soon be replaced by XML-based alternatives (which never came to be); instead, HTML5 came along. Browsers were also much quirkier than they are today, and the idea of cross-browser Javascript libraries was not widespread.

So let's go through each phase and see how to debug it if needed, starting in the browser. Let's base the discussion on a simple example that uses an Ajax request.

A JSF 2 Hello World Example

The following is a minimal JSF 2 page that receives an input text from the user, sends the text via an Ajax request to the backend and refreshes only an output label:

<h:body>
    <h3>JSF 2.2 Hello World Example</h3>
    <h:form>
        <h:outputText id="output" value="#{simpleFormBean.inputText}"/>
        <h:inputText id="input" value="#{simpleFormBean.inputText}"/>
        <h:commandButton value="Submit" action="index">
            <f:ajax execute="input" render="output"/>
        </h:commandButton>
    </h:form>
</h:body>

The page binds both fields to a backing bean named simpleFormBean; a sketch of what that bean could look like follows.
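The backing bean itself is not shown in the original post; the sketch below is a plausible minimal version, with the bean name and property inferred from the #{simpleFormBean.inputText} expression (a @ManagedBean class named SimpleFormBean is exposed to EL as simpleFormBean by convention):

import java.io.Serializable;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;

// Holds the text typed by the user; the Ajax request writes this
// property during the Update Model phase, and the Render Response
// phase reads it back when repainting the output label.
@ManagedBean
@ViewScoped
public class SimpleFormBean implements Serializable {

    private String inputText;

    public String getInputText() {
        return inputText;
    }

    public void setInputText(final String inputText) {
        this.inputText = inputText;
    }
}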
Following one Ajax request – to the server and back

Let's click Submit in order to trigger the Ajax request, and use the Chrome Dev Tools Network tab (right-click any element on the page and inspect it to open the tools).

What goes over the wire?

This is what we see in the Form Data section of the request:

j_idt8:input: Hello World
javax.faces.ViewState: -2798727343674530263:954565149304692491
javax.faces.source: j_idt8:j_idt9
javax.faces.partial.event: click
javax.faces.partial.execute: j_idt8:j_idt9 j_idt8:input
javax.faces.partial.render: j_idt8:output
javax.faces.behavior.event: action
javax.faces.partial.ajax: true

This request says: the new value of the input field is "Hello World"; send me a new value for the output field only, and don't navigate away from this page. Let's see how this can be read from the request. As we can see, the new values of the form are submitted to the server, namely the "Hello World" value. This is the meaning of the several entries:

- javax.faces.ViewState identifies the view from which the request was made.
- The request is an Ajax request, as indicated by the flag javax.faces.partial.ajax.
- The request was triggered by a click, as defined in javax.faces.partial.event.

But what are those j_ strings? They are space-separated generated identifiers of HTML elements. For example, we can see which page element corresponds to j_idt8:input by inspecting it with the Chrome Dev Tools.

There are also 3 extra form parameters that use these identifiers and are linked to UI components:

- javax.faces.source: the identifier of the HTML element that originated this request, in this case the id of the submit button.
- javax.faces.partial.execute: the list of identifiers of the elements whose values are sent to the server for processing, in this case the input text field.
- javax.faces.partial.render: the list of identifiers of the sections of the page that are to be 'repainted', in this case the output field only.

But what happens when the request hits the server?

JSF lifecycle – Restore View Phase

Once the request reaches the server, the JSF controller will inspect the javax.faces.ViewState and identify which view it refers to. It will then build or restore a Java representation of the view, somewhat similar to the document definition on the browser side. The view will be attached to the request and used throughout. There is usually little need to debug this phase during application development.

JSF Lifecycle – Apply Request Values

The JSF controller will then apply the new values received via the request to the view's widgets. The values might still be invalid at this point. Each JSF component gets a call to its decode method in this phase. This method retrieves the submitted value for the widget in question from the HTTP request and stores it on the widget itself. To debug this, we can put a breakpoint in the decode method of the HtmlInputText class to see the value "Hello World". A conditional breakpoint on the HTML clientId of the field we want lets us quickly debug only the decoding of that one component, even in a large page with many other similar widgets. (If attaching a debugger is not practical, a logging variant of the component, sketched at the end of this walkthrough, can serve a similar purpose.) Next after decoding comes the validation phase.

JSF Lifecycle – Process Validations

In this phase, validations are applied, and if a value is found to be in error (for example, an invalid date), the request bypasses the Update Model and Invoke Application phases and goes directly to the Render Response phase. To debug this phase, a similar breakpoint can be put on the processValidators method, or in the validators themselves if you happen to know which ones are involved, or if they are custom.

JSF Lifecycle – Update Model

By this phase, we know all the submitted values were valid. JSF can now update the view model by applying the new values received in the request to the plain Java objects in the view model. This phase can be debugged by putting a breakpoint in the processUpdates method of the component in question, optionally using a similar conditional breakpoint to break only on the component needed.

JSF Lifecycle – Invoke Application

This is the simplest phase to debug. The application now has an updated view model, and some logic can be applied to it. This is where the action listeners defined in the XML view definition (the 'action' properties and the listener tags) are executed.

JSF Lifecycle – Render Response

This is the phase I end up debugging the most: why is a value not being displayed as expected, and so on; it can all be found here. In this phase, the view and the new model values are transformed from Java objects into HTML, CSS and possibly Javascript, and sent back over the wire to the browser. This phase can be debugged using breakpoints in the encodeBegin, encodeChildren and encodeEnd methods of the component in question. The components will either render themselves or delegate rendering to a Renderer class.

Back in the browser

It was a long trip, but we are back where we started! This is how the response generated by JSF looks once received in the browser:

<?xml version='1.0' encoding='UTF-8'?>
<partial-response>
    <changes>
        <update id="j_idt8:output"><![CDATA[<span id="j_idt8:output">Hello World</span>]]></update>
        <update id="javax.faces.ViewState"><![CDATA[-8188482707773604502:6956126859616189525]]></update>
    </changes>
</partial-response>

What the Javascript part of the framework does is take the contents of the partial response and apply it, update by update. Using the id of each update, the client-side JSF callback searches for a component with that id, deletes it from the document and replaces it with the new, updated version. In this case, "Hello World" will show up on the label next to the input text field! And so that's how JSF works under the hood.
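As promised in the Apply Request Values section, here is a logging variant of the input component. This class is my own illustration, not part of JSF or of the original post; wiring it in (for example via an @FacesComponent annotation or a component entry in faces-config.xml, plus a matching tag) is omitted here:

import javax.faces.component.html.HtmlInputText;
import javax.faces.context.FacesContext;

// Illustrative only: logs what Apply Request Values decodes for this
// input, then defers to the standard HtmlInputText behavior.
public class LoggingHtmlInputText extends HtmlInputText {

    @Override
    public void decode(final FacesContext context) {
        final String clientId = getClientId(context);
        // The submitted value arrives as a plain request parameter
        // keyed by the component's clientId (e.g. "j_idt8:input").
        final String submitted = context.getExternalContext()
            .getRequestParameterMap().get(clientId);
        System.out.println("decode(" + clientId + ") -> " + submitted);
        super.decode(context);
    }
}

For everyday work, the conditional breakpoint remains the lighter option; a logging component like this is mainly useful on environments where no debugger can be attached.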
But what if we need to debug the Javascript part of the framework?

Debugging the JSF Javascript Code

The Chrome Dev Tools can help debug the client part. For example, let's say we want to halt the client when an Ajax request is triggered. We need to go to the Sources tab, add an XHR (Ajax) breakpoint and trigger the browser action. The debugger will stop, and the call stack can be examined.

For some frameworks like Primefaces, the Javascript sources might be minified (not human-readable) because they are optimized for size. To solve this, download the source code of the library and do a non-minified build of the jar. There are usually instructions for this; otherwise check the project poms. This will install in your Maven repository a jar with non-minified sources for debugging.

The ui:debug tag: the ui:debug tag allows viewing a lot of debugging information using a keyboard shortcut; see here for further details.

Final Thoughts

JSF is very popular in the enterprise Java world, and it handles a lot of problems well, especially if the UI designers take into account the possibilities of the widget library being used. The problem is that there are usually feature requests that force us to dig deeper into the widgets' internal implementation in order to customize them, and this requires HTML, CSS, Javascript and HTTP knowledge, plus knowledge of the JSF lifecycle.

Is polyglot an alternative?

We can wonder: if developers have to know a fair amount about web technologies in order to debug JSF effectively, it might be simpler to build enterprise front ends (just the client part) using those technologies directly instead. It's possible that a polyglot approach of a Java backend plus a Javascript-only frontend could prove effective in the near future, especially using some sort of client-side MVC framework like Angular. This would require learning more Javascript (have a look at the Javascript for Java developers post if curious), but this is often already necessary for custom widget development in JSF anyway.

Conclusions and some questions if you have the time

Thanks for reading; please take a moment to share your thoughts on these matters in the comments below:

- Do you believe polyglot development (Java/Javascript) is a viable alternative in general, and in your workplace in particular?
- Did you find one of the GWT-based frameworks (plain GWT, Vaadin, Errai), or the Play Framework, to be easier to use and more productive?

Reference: How JSF Works and how to Debug it – is polyglot an alternative? from our JCG partner Aleksey Novik at The JHades Blog....

Overlord – The One Place To Rule And Manage your APIs

We're living in a more and more distributed world today. Instead of having individual departmental projects running on some hardware under a random desk, today's computer systems run at large scale, centralized or even distributed. The needs for monitoring and managing never changed, but they have grown far more complex over time. If you put all those cross-functional features into one bucket, it would most likely be called "Governance". This can happen on many levels: people, processes and, of course, infrastructure components.

What is Overlord?

Overlord is a set of sub-projects which deal with different aspects of system governance. All four sub-projects are so-called "upstream" projects for JBoss Fuse Service Works. But Service Works is even more than that, so let's just focus on these four for now.

S-RAMP

Overlord S-RAMP is a full-featured artifact repository comprising a common data model, a powerful query language, multiple rich interfaces, flexible integration and useful tools. It aims to provide a full implementation of the OASIS S-RAMP specification.

Developer links:
- Issue Tracker
- Source code on GitHub
- Documentation
- @OverlordSRAMP
- 18-Month Roadmap

DTGov

This component provides the capability to manage the lifecycle of systems from inception through deployment to subsequent change management. A flexible, workflow-driven approach is used to enable organizations to customize governance to fit the way they work.

Developer links:
- Issue Tracker
- Source code on GitHub
- Documentation
- @OverlordDTGov
- 18-Month Roadmap

Runtime Governance (RTGov)

This component provides the infrastructure to capture service activity information and then correlate, analyse and finally present the information in a form that can be used by a business to police Business/Service Level Agreements and optimize their business.

Developer links:
- Issue Tracker
- Source code on GitHub
- Documentation
- @OverlordRTGov

API Management

If you want to centralize the governance of your APIs, this is the project for you! The API Management project provides a rich management layer used to configure the governance policies you want applied to your APIs. Once configured, the API Management runtime Policy Engine can run as part of a standard gateway or embedded in any application.

Developer links:
- Issue Tracker
- Source code on GitHub
- Documentation
- @OverlordAPIMan
- 18-Month Roadmap

What's going on lately?

Overlord just got a brand new website up and running. Have a look at it, and don't forget to give feedback or work on it; as it is also open source, you are free to fork it and send a pull request. Make sure to look at the contributor guidelines beforehand.

Reference: Overlord – The One Place To Rule And Manage your APIs from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....