
Testing JVM server-side JavaScript with Jasmine, Spock and Nashorn

JavaScript usage is not limited to client-side code in the browser or NodeJS-powered server-side code. Many JVM-based projects use it as an internal scripting language. Testing this sort of functionality is neither straightforward nor standard. In this post I intend to demonstrate an approach for testing JavaScript in a server-side JVM environment using mature tools like Jasmine, Spock and Nashorn. Using JavaScript as a scripting engine inside a JVM application differs significantly from client-side coding, and unfortunately there is no industry-standard tooling for testing it today. Looking at the existing approaches found on the Internet, I'd like to highlight the following disadvantages:

- lack of integration with build and continuous integration tools (Maven, Gradle, Jenkins, etc.)
- insufficient cooperation with IDEs: no possibility to run a single suite or test from the IDE, no way to view test execution reports from the IDE
- tight coupling to the browser environment
- no possibility of using customized JavaScript executors

As far as I've seen, most projects test their embedded business scripts by calling a JS engine runner, passing the script under test to it, and asserting by inspecting side effects on the engine or mocks after script execution. These approaches usually share similar drawbacks:

- hard to stub or mock something in JS code, usually ending up hacking on the JS prototype
- too much orchestration needed to mock the environment for a script
- hard to organize tests into suites and report test execution errors
- the previous point causes the creation of custom test suite frameworks for particular projects
- existing JavaScript testing tools and frameworks are not leveraged

So, driven by the need for comfortable embedded JavaScript testing in JVM projects, I've created this sample setup.
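Because embedded-JS setups differ per JDK, it is worth checking which script engines a given JVM actually provides before wiring a test setup to one. A small probe via the standard javax.script API (note that Nashorn is bundled in JDK 8-14 and removed from JDK 15 onwards, so the list may or may not contain it):

```java
import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;

public class ListEngines {
    public static void main(String[] args) {
        // Enumerate every script engine registered with this JVM.
        // On JDK 8-14 this typically includes Nashorn; on newer JDKs
        // the list may be empty unless an engine is on the classpath.
        for (ScriptEngineFactory f : new ScriptEngineManager().getEngineFactories()) {
            System.out.println(f.getEngineName() + " " + f.getNames());
        }
    }
}
```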
To fulfill our goals the following tools will be used:

- Jasmine, one of the best known TDD/BDD tools for JavaScript
- Spock, a great testing framework for the JVM powered by JUnit and Groovy
- Nashorn, the modern scripting engine introduced in JDK 8

Customized JavaScript runner (Nashorn based)

There's no need to conform to standards in non-browser JS environments, so developers usually extend the scripting engine with custom functions, built-in variables, etc. It is extremely important to use exactly the same runner for both production and testing purposes. Let's assume we have such a customized runner, accepting a script name and a map of predefined variables as parameters and returning the resulting value of the executed script.

JavaScriptRunner.java

    public class JavaScriptRunner {
        public static Object run(String script, Map<String, Object> params) throws Exception {
            ScriptEngineManager factory = new ScriptEngineManager();
            ScriptEngine engine = factory.getEngineByName("nashorn");
            engine.getBindings(ScriptContext.ENGINE_SCOPE).putAll(params);
            return engine.eval(new InputStreamReader(JavaScriptRunner.class.getResourceAsStream(script))); // (1)
        }
    }

1 the script source is looked up on the classpath.

Jasmine setup

To start using the Jasmine framework we need to:

- download Jasmine and unpack it to the /jasmine/jasmine-2.1.2 folder in the project resources directory
- write a custom bootstrap script, since Jasmine doesn't support JVM-based platforms

jasmine2-bootstrap.js

    var loadFromClassPath = function(path) { // (1)
        load(Java.type("ua.eshepelyuk.blog.nashorn.Jasmine2Specification").class.getResource(path).toExternalForm());
    };

    var window = this;

    loadFromClassPath("/jasmine/jasmine-2.1.2/jasmine.js");
    loadFromClassPath("/jasmine/jasmine2-html-stub.js"); // (2)
    loadFromClassPath("/jasmine/jasmine-2.1.2/boot.js");

    load({script: __jasmineSpec__, name: __jasmineSpecName__}); // (3)

    onload(); // (4)

    jsApiReporter.specs(); // (5)

1 helper function resolving a script path from a classpath location.
2 Nashorn-specific code adjusting Jasmine for non-browser environments. Not a part of the Jasmine distribution.
3 loading the test suite source code, see the next section for details.
4 faking the browser load event, which triggers test suite execution.
5 this value will be returned as the script result.

Transform Jasmine report into Spock tests

Having a JS executor and a bootstrap script for Jasmine, we could create a JUnit test that iterates over the suite results and checks that all of them are successful. But it would become a nightmare to understand which particular test failed and what the reason for the failure was. What we'd really like is the ability to represent each Jasmine specification as a JUnit test, so that any Java tool can pick up and inspect the results. This is why Spock could be the answer to the problem, with its Data Driven Testing that allows a developer to declare a list of input data; for each item of that dataset a new test is created and executed. This is similar to JUnit's Parameterized Test, but a much more powerful implementation. So the idea is to treat the Jasmine test suite results obtained after running the bootstrap script as an array of input data, each item of which is passed to a Spock test. The test itself then provides an assertion to report successful and failed tests properly, i.e. the assertion should check the status of the Jasmine specification:

- if the status is pending or passed, the specification is either ignored or successful
- otherwise the Spock test should throw an assertion error, populated with the failure messages reported by Jasmine

Jasmine2Specification.groovy

    abstract class Jasmine2Specification extends Specification {
        @Shared def jasmineResults

        def setupSpec() {
            def scriptParams = [
                "__jasmineSpec__"    : getMetaClass().getMetaProperty("SPEC").getProperty(null), // (1)
                "__jasmineSpecName__": "${this.class.simpleName}.groovy"
            ]
            jasmineResults = JavaScriptRunner.run("/jasmine/jasmine2-bootstrap.js", scriptParams) // (2)
        }

        def isPassed(def specRes) { specRes.status == "passed" || specRes.status == "pending" }

        def specErrorMsg(def specResult) {
            specResult.failedExpectations
                .collect { it.value }.collect { it.stack }.join("\n\n\n")
        }

        @Unroll
        def '#specName'() {
            expect:
            assert isPassed(item), specErrorMsg(item) // (3)
            where:
            item << jasmineResults.collect { it.value }
            specName = (item.status != "pending" ? item.fullName : "IGNORED: $item.fullName") // (4)
        }
    }

1 exposing the source code of the Jasmine suite as a variable accessible to the JS executor.
2 actual execution of the Jasmine suite.
3 for each suite result we assert that it succeeded, throwing an assertion error with the Jasmine-originated message on failure.
4 an additional data provider variable to highlight ignored tests.

Complete example

Let's create a test suite for a simple JavaScript function.

mathUtils.js

    var add = function add(a, b) {
        return a + b;
    };

Using the base class from the previous step we can create a Spock suite containing JavaScript tests. To demonstrate all the possibilities of our solution we will create a successful, a failed and an ignored test.
MathUtilsTest.groovy

    class MathUtilsTest extends Jasmine2Specification {
        static def SPEC = """ // (1)
            loadFromClassPath("/js/mathUtils.js"); // (2)
            describe("suite 1", function() {
                it("should pass", function() {
                    expect(add(1, 2)).toBe(3);
                });
                it("should fail", function() {
                    expect(add(1, 2)).toBe(3);
                    expect(add(1, 2)).toBe(0);
                });
                xit("should be ignored", function() {
                    expect(add(1, 2)).toBe(3);
                });
            })
        """
    }

1 the actual code of the Jasmine suite is represented as a String variable.
2 loading the module under test using the function inherited from jasmine2-bootstrap.js.

IntelliJ IDEA language injection

Although this micro framework should work in all IDEs, the handiest way to use it is within IntelliJ IDEA, thanks to its language injection. This feature allows embedding an arbitrary language into a file written in a different programming language, so we can have a JavaScript code block embedded into a Spock specification written in Groovy.

Pros and cons of the solution

Advantages:
- usage of industry-standard testing tools for both languages
- seamless integration with build tools and continuous integration tools
- ability to run a single suite from the IDE
- ability to run a single test from a particular suite, thanks to Jasmine's focused specs feature

Disadvantages:
- no clean way of detecting the particular line of source code in case of a test exception
- the setup is somewhat IntelliJ IDEA oriented

P.S. For this sample project I've used the modern Nashorn engine from JDK 8, but in fact there's no limitation here: the same approach has been successfully applied to projects using the older Rhino engine. And again, Jasmine is just my personal preference; with additional work the code could be adjusted to leverage Mocha, QUnit and so on. The full project's code is available at My GitHub.

Reference: Testing JVM server-side JavaScript with Jasmine, Spock and Nashorn from our JCG partner Evgeny Shepelyuk at the jk's blog blog.

cjmx: A command-line version of JConsole

JConsole is a nice tool when it comes to monitoring a running Java application. But when it is not possible to connect to a JVM with JConsole directly (due to network restrictions, for example) and SSH tunneling is not possible, then it would be great to have a command-line version of JConsole. cjmx is such a command-line version of JConsole. After having downloaded the single jar file cjmx_2.10-2.1.0-app.jar, you can start it by including tools.jar in the classpath:

    java -cp $JAVA_HOME/lib/tools.jar:cjmx_2.10-2.1.0-app.jar cjmx.Main

This will open a "JMX shell" with the following basic commands:

- help: shows a basic help screen that explains the available commands.
- jps/list: like the jps tool from the JDK, this command prints out all Java processes with their process ids.
- connect: connects to a running JVM process.
- format: lets you specify whether you want your output in a simple text format or as a JSON string.
- exit: quits the application.

To learn more about cjmx, let us start a session and connect to the JVM that is running cjmx itself:

    > jps
    13198 cjmx.Main
    > connect 13198
    Connected to local virtual machine 13198
    Connection id: rmi://0:0:0:0:0:0:0:1  2
    Default domain: DefaultDomain
    5 domains registered consisting of 19 total MBeans
    >
    describe    disconnect    exit    format    help
    invoke      mbeans        names   sample    select    status

After the last appearance of > you see a great feature of cjmx: auto-completion. Whenever you do not know which commands are available, you can just type [TAB] and cjmx will list them. This even works for MBean names, as we will see. Now that we are connected to our JVM, we can let cjmx describe an available MBean. With auto-completion we can just start typing describe '[TAB] to retrieve a list of all available packages:

    > describe '
    :                   JMImplementation:   com.sun.management:
    java.lang:          java.nio:           java.util.logging:

This way we can dig through the MBean names until we have found what we are looking for.
In this example we are interested in the MBean 'java.lang:type=OperatingSystem':

    > describe 'java.lang:type=OperatingSystem'
    Object name: java.lang:type=OperatingSystem
    -------------------------------------------
    Description: Information on the management interface of the MBean

    Attributes:
      MaxFileDescriptorCount: long
      OpenFileDescriptorCount: long
      FreePhysicalMemorySize: long
      CommittedVirtualMemorySize: long
      FreeSwapSpaceSize: long
      ProcessCpuLoad: double
      ProcessCpuTime: long
      SystemCpuLoad: double
      TotalPhysicalMemorySize: long
      TotalSwapSpaceSize: long
      AvailableProcessors: int
      Arch: String
      SystemLoadAverage: double
      Name: String
      Version: String
      ObjectName: ObjectName

As we can see, the MBean 'java.lang:type=OperatingSystem' provides information about the number of open files, the current CPU load, etc. So let's query the number of open files by invoking the command mbeans with the name of the MBean as well as the sub-command select and the MBean's attribute:

    > mbeans 'java.lang:type=OperatingSystem' select OpenFileDescriptorCount
    java.lang:type=OperatingSystem
    ------------------------------
      OpenFileDescriptorCount: 35

We can even query all available attributes by using the star instead of the concrete name of an attribute. Please note that pressing the cursor-up key recalls the last issued command, hence we do not have to type it again; we just replace the attribute's name with the star:

    > mbeans 'java.lang:type=OperatingSystem' select *
    java.lang:type=OperatingSystem
    ------------------------------
      MaxFileDescriptorCount: 10240
      OpenFileDescriptorCount: 36
      ...

Using the sub-command invoke we can even invoke MBean methods, as in the following example:

    > mbeans 'java.lang:type=Memory' invoke gc()
    java.lang:type=Memory: null

Now that we know how to query attributes and invoke methods, we can start to script this functionality in order to monitor the application.
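Under the hood, cjmx's select and invoke are thin wrappers over the standard JMX API. The same queries can be issued programmatically against the local platform MBean server, as in the sketch below (it reads AvailableProcessors rather than OpenFileDescriptorCount, since the latter is only exposed on Unix-like platforms):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanQuery {
    public static void main(String[] args) throws Exception {
        // The same MBean server cjmx connects to, only in-process instead of remote.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName os = new ObjectName("java.lang:type=OperatingSystem");

        // Equivalent of: mbeans 'java.lang:type=OperatingSystem' select AvailableProcessors
        Object processors = server.getAttribute(os, "AvailableProcessors");
        System.out.println("AvailableProcessors: " + processors);

        // Equivalent of: mbeans 'java.lang:type=Memory' invoke gc()
        server.invoke(new ObjectName("java.lang:type=Memory"), "gc", null, null);
        System.out.println("gc invoked");
    }
}
```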
To support this kind of scripting, cjmx lets you pass all "commands" as arguments to the application itself, hence you can invoke cjmx in the following way (where <PID> has to be replaced by the concrete process id of a running JVM):

    java -cp $JAVA_HOME/lib/tools.jar:cjmx_2.10-2.1.0-app.jar cjmx.Main <PID> "mbeans 'java.lang:type=OperatingSystem' select OpenFileDescriptorCount"
    java.lang:type=OperatingSystem
    ------------------------------
      OpenFileDescriptorCount: 630

With this knowledge we can write a simple bash script that queries the JVM each second for the number of open files:

    #!/bin/bash
    while true ; do
        echo `date` | tr -d '\n'
        java -cp /usr/java/default/lib/tools.jar:cjmx_2.10-2.1.0-app.jar cjmx.Main $1 \
            "mbeans 'java.lang:type=OperatingSystem' select OpenFileDescriptorCount" \
            | grep OpenFileDescriptorCount | cut -f 2 -d :
        sleep 1
    done

This produces a new line each second with a timestamp and the current number of open files. When redirected into a file, we have a simple log and can evaluate it later on.

Conclusion: cjmx is a great alternative to JConsole when the latter cannot be used due to network restrictions on a server machine. The ability to issue commands by passing them on the command line makes it suitable for small monitoring scripts.

Reference: cjmx: A command-line version of JConsole from our JCG partner Martin Mois at the Martin's Developer World blog.

Java 8 StringJoiner

At the release of Java 8 most attention went to the Lambdas, the new Date and Time API and the Nashorn JavaScript engine. In their shadow there are smaller but also interesting changes; amongst them is the introduction of the StringJoiner. The StringJoiner is a utility to delimit a list of characters or strings. You may recognize the code below:

    String getString(List<String> items) {
        StringBuilder sb = new StringBuilder();
        for (String item : items) {
            if (sb.length() != 0) {
                sb.append(",");
            }
            sb.append(item);
        }
        return sb.toString();
    }

In Java 8 this can be replaced by:

    String getString(List<String> items) {
        StringJoiner stringJoiner = new StringJoiner(", ");
        for (String item : items) {
            stringJoiner.add(item);
        }
        return stringJoiner.toString();
    }

If you already know how to use streams, the following code removes a few more obsolete lines:

    String getString(List<String> items) {
        StringJoiner stringJoiner = new StringJoiner(", ");
        items.stream().forEach(stringJoiner::add);
        return stringJoiner.toString();
    }

Another valuable addition is the ability to set a prefix and a suffix. They are passed as the second and third parameter of the StringJoiner constructor. For example:

    String getString(List<String> items) {
        StringJoiner stringJoiner = new StringJoiner(", ", "<<", ">>");
        items.stream().forEach(stringJoiner::add);
        return stringJoiner.toString();
    }

This code can return, for example: <<One, Two, Three, Four>>

Another way to compose a new String from an iterable is the join method on the String class. The join method supports a separator, but no prefix and suffix. You can use it as follows:

    String result = String.join(", ", "One", "Two", "Three");

The result will be: One, Two, Three

Reference: Java 8 StringJoiner from our JCG partner Sjoerd Schunselaar at the JDriven blog.
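Worth adding: when you are already in a stream pipeline, java.util.stream.Collectors.joining takes the same delimiter/prefix/suffix arguments as the StringJoiner constructor (and uses a StringJoiner internally), which turns the stream variant above into a one-liner:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class JoiningDemo {
    public static void main(String[] args) {
        List<String> items = Arrays.asList("One", "Two", "Three", "Four");

        // Delimiter, prefix and suffix: the same three arguments as the
        // StringJoiner constructor, because the collector delegates to one.
        String joined = items.stream().collect(Collectors.joining(", ", "<<", ">>"));

        System.out.println(joined); // <<One, Two, Three, Four>>
    }
}
```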

JAXB Tutorial for Java XML Binding – The ULTIMATE Guide

Java offers several options for handling XML structures and files; one of the most common is JAXB. JAXB stands for Java Architecture for XML Binding. It offers the possibility to convert Java objects into XML structures and the other way around. JAXB has shipped with the standard JRE bundle since JRE 1.6. The first specification of JAXB was done in March 2003 and the work is tracked in Java Specification Request 31: https://jcp.org/en/jsr/detail?id=31. In this specification request you can find a lot of information regarding the long life of JAXB and all the improvements that have been made. As already mentioned, JAXB has been included in the JRE bundle since update 1.6; before that it was necessary to include its libraries in the specific Java project in order to use it. Before JAXB was available (a long time ago), the way Java handled XML documents was the DOM: http://www.w3.org/DOM/. This was not a very good approach because there was almost no abstraction from XML nodes into Java objects and all value types were inferred as Strings. JAXB provides several benefits like an object-oriented approach to XML nodes and attributes, typed values, annotations and many others that we are going to explain in this article. All examples in this tutorial have been implemented using the following software versions: JRE 1.8.0 for 32b; the IDE used is Eclipse SDK Version: Luna (4.4). However, any other Java version containing the JAXB API and any other IDE should work perfectly fine, since all the code is standard Java 8.

Table Of Contents

1. Mapping
2. Marshal
3. Un-marshal
4. Adapters
5. XSDs
6. Annotations
7. Tools
8. Best Practices
9. Summary
10. Resources
11. Download

1. Mapping

Java objects can be bound to XML structures by using certain annotations and following specific rules. This is what we call mapping.
In this tutorial we are going to explain the following points, providing examples, resources and extra information:

- We are going to show some examples of how to convert Java objects into XML structures; this is called marshaling. We will show how to handle primitive types, collections and more complex types using adapters.
- We will also explain how to do the complementary operation, called un-marshaling, i.e. converting XML files into Java objects.
- All this is done using Java annotations; we will list and explain the most important annotations used within JAXB.
- We will also provide an introduction to XSDs (XML Schemas), which are used for validation and are a powerful tool supported by JAXB. We will see how XSDs can be used for marshaling as well.
- Finally, we will list several tools that can be used in combination with JAXB to help programmers in different ways.

2. Marshal

In this chapter we are going to see how to convert Java objects into XML files and what should be taken into consideration while doing this operation. This is commonly called marshaling. First of all we indicate to JAXB which Java elements from our business model correspond to which XML nodes:

    @XmlType( propOrder = { "name", "capital", "foundation", "continent", "population" } )
    @XmlRootElement( name = "Country" )
    public class Country {

        @XmlElement( name = "Country_Population" )
        public void setPopulation( int population ) {
            this.population = population;
        }

        @XmlElement( name = "Country_Name" )
        public void setName( String name ) {
            this.name = name;
        }

        @XmlElement( name = "Country_Capital" )
        public void setCapital( String capital ) {
            this.capital = capital;
        }

        @XmlAttribute( name = "importance", required = true )
        public void setImportance( int importance ) {
            this.importance = importance;
        }
        ...

The class above contains some JAXB annotations that indicate which XML nodes we are going to generate. For this purpose we are using the annotations:

- @XmlRootElement for the root element.
- @XmlElement in combination with setter methods.
- @XmlAttribute to pass attributes to the XML nodes. These attributes can have properties, such as being required or not.
- @XmlType to indicate special options like the order of appearance in the XML.

We will explain these and other annotations in more detail in later chapters; for the moment, I just want to mention them here. The next step is to generate XML files from Java objects. For this purpose we create a simple test program using the JAXBContext and its marshaling functionality:

    Country spain = new Country();
    spain.setName( "Spain" );
    spain.setCapital( "Madrid" );
    spain.setContinent( "Europe" );
    spain.setImportance( 1 );
    spain.setFoundation( LocalDate.of( 1469, 10, 19 ) );
    spain.setPopulation( 45000000 );

    /* init jaxb marshaller */
    JAXBContext jaxbContext = JAXBContext.newInstance( Country.class );
    Marshaller jaxbMarshaller = jaxbContext.createMarshaller();

    /* set this flag to true to format the output */
    jaxbMarshaller.setProperty( Marshaller.JAXB_FORMATTED_OUTPUT, true );

    /* marshaling of java objects into xml (output to file and standard output) */
    jaxbMarshaller.marshal( spain, new File( "country.xml" ) );
    jaxbMarshaller.marshal( spain, System.out );

Basically, the most important part here is the use of the class javax.xml.bind.JAXBContext. This class provides a framework for validating, marshaling and un-marshaling XML into (and from) Java objects and it is the entry point to the JAXB API. More information about this class can be found here: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/JAXBContext.html. In our small example, we are just using this class to create a JAXB context that allows us to marshal objects of the type passed as parameter.
This is done exactly here:

    JAXBContext jaxbContext = JAXBContext.newInstance( Country.class );
    Marshaller jaxbMarshaller = jaxbContext.createMarshaller();

The result of the main program would be:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <Country importance="1">
        <Country_Name>Spain</Country_Name>
        <Country_Capital>Madrid</Country_Capital>
        <Country_Foundation_Date></Country_Foundation_Date>
        <Country_Continent>Europe</Country_Continent>
        <Country_Population>45000000</Country_Population>
    </Country>

In the JAXB application shown above we are just converting simple types (Strings and ints) present in a container class into XML nodes. We can see that date-based attributes like the foundation date are missing; we will explain later how to solve this problem for complex types. This looks easy. JAXB supports all kinds of Java objects: other primitive types, collections, dates, etc. If we would like to map a list of elements into an XML document, we can write:

    @XmlRootElement( name = "Countries" )
    public class Countries {
        List countries;

        /**
         * element that is going to be marshaled in the xml
         */
        @XmlElement( name = "Country" )
        public void setCountries( List countries ) {
            this.countries = countries;
        }

We can see in the snippet above that a new container class is needed in order to indicate to JAXB that a class containing a list is present.
The result of a program similar to the one shown above would be:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <Countries>
        <Country importance="1">
            <Country_Name>Spain</Country_Name>
            <Country_Capital>Madrid</Country_Capital>
            <Country_Foundation_Date></Country_Foundation_Date>
            <Country_Continent>Europe</Country_Continent>
            <Country_Population>0</Country_Population>
        </Country>
        <Country importance="0">
            <Country_Name>USA</Country_Name>
            <Country_Capital>Washington</Country_Capital>
            <Country_Foundation_Date></Country_Foundation_Date>
            <Country_Continent>America</Country_Continent>
            <Country_Population>0</Country_Population>
        </Country>
    </Countries>

There are several options when handling collections:

- We can use wrapper annotations: the annotation javax.xml.bind.annotation.XmlElementWrapper offers the possibility to create a wrapper around an XML representation. This wrapper can contain a collection of elements.
- We can use collection-based annotations like javax.xml.bind.annotation.XmlElements or javax.xml.bind.annotation.XmlElementRefs, which offer collection functionality but less flexibility.
- We can use a container for the collection: a container class (Countries in our case) which has a member of a java.util.Collection type (of Country elements in our case). This is my favorite approach since it offers more flexibility, and it is the approach shown before.

3. Un-marshal

In this chapter we are going to see how to do the complementary operation: un-marshal XML files into Java objects and what should be taken into consideration while doing so.
First of all we create the XML structure that we want to un-marshal:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <Countries>
        <Country>
            <Country_Name>Spain</Country_Name>
            <Country_Capital>Madrid</Country_Capital>
            <Country_Continent>Europe</Country_Continent>
        </Country>
        <Country>
            <Country_Name>USA</Country_Name>
            <Country_Capital>Washington</Country_Capital>
            <Country_Continent>America</Country_Continent>
        </Country>
        <Country>
            <Country_Name>Japan</Country_Name>
            <Country_Capital>Tokyo</Country_Capital>
            <Country_Continent>Asia</Country_Continent>
        </Country>
    </Countries>

It is worth mentioning that we deleted the foundation dates. If these were present we would get an error, since JAXB does not know how to un-marshal them; we will see afterwards how to solve this problem. After that we can create a program that "reads" this XML file and parses it into the proper Java objects:

    File file = new File( "countries.xml" );
    JAXBContext jaxbContext = JAXBContext.newInstance( Countries.class );

    /**
     * the only difference with the marshaling operation is here
     */
    Unmarshaller jaxbUnmarshaller = jaxbContext.createUnmarshaller();
    Countries countries = (Countries)jaxbUnmarshaller.unmarshal( file );
    System.out.println( countries );

We can see that the code above does not differ much from the one shown in the last chapter related to the marshal operation. We again use the class javax.xml.bind.JAXBContext, but in this case the method used is createUnmarshaller(), which takes care of providing an object of the type javax.xml.bind.Unmarshaller. This object is responsible for un-marshaling the XML afterwards. This program uses the annotated class Country.
It looks like:

    @XmlType( propOrder = { "name", "capital", "foundation", "continent", "population" } )
    @XmlRootElement( name = "Country" )
    public class Country {

        @XmlElement( name = "Country_Population" )
        public void setPopulation( int population ) {
            this.population = population;
        }

        @XmlElement( name = "Country_Name" )
        public void setName( String name ) {
            this.name = name;
        }

        @XmlElement( name = "Country_Capital" )
        public void setCapital( String capital ) {
            this.capital = capital;
        }

        @XmlAttribute( name = "importance", required = true )
        public void setImportance( int importance ) {
            this.importance = importance;
        }
        ...

There are not many differences from the classes used in the last chapter; the same annotations are used. This is the output produced by the program in the standard console:

    Name: Spain Capital: Madrid Europe
    Name: USA Capital: Washington America
    Name: Japan Capital: Tokyo Asia

What was explained for the marshaling operation applies here as well. It is possible to un-marshal objects of primitive types, including numeric ones; it is also possible to un-marshal collection-based elements like lists or sets, with the same possibilities. As we can see in the results above and in the last chapter, the attribute of the type java.time.LocalDate has not been converted. This happens because JAXB does not know how to do it. It is possible to handle complex types like this one using adapters; we are going to see this in the next chapter.

4. Adapters

When handling complex types that may not be directly available in JAXB, we need to write an adapter to tell JAXB how to manage the specific type. We are going to explain this using an example of marshaling (and un-marshaling) an element of the type java.time.LocalDate.
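The conversion such a date adapter has to perform is just a String round trip. The sketch below (plain Java, no JAXB involved) shows that LocalDate.toString() and LocalDate.parse() are symmetric, both using the ISO-8601 yyyy-MM-dd format, which is what makes an adapter built on them loss-free:

```java
import java.time.LocalDate;

public class DateRoundTrip {
    public static void main(String[] args) {
        LocalDate founded = LocalDate.of(1469, 10, 19);

        // marshal direction: LocalDate -> String (ISO-8601)
        String text = founded.toString();

        // unmarshal direction: String -> LocalDate, same format by default
        LocalDate back = LocalDate.parse(text);

        System.out.println(text + " roundTripEqual=" + founded.equals(back));
        // prints: 1469-10-19 roundTripEqual=true
    }
}
```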
    public class DateAdapter extends XmlAdapter<String, LocalDate> {

        public LocalDate unmarshal( String date ) throws Exception {
            return LocalDate.parse( date );
        }

        public String marshal( LocalDate date ) throws Exception {
            return date.toString();
        }
    }

The code above implements the marshal and unmarshal methods of the abstract class javax.xml.bind.annotation.adapters.XmlAdapter with the proper types and results. Afterwards, we tell JAXB where to use it via the annotation @XmlJavaTypeAdapter:

    ...
    @XmlElement( name = "Country_Foundation_Date" )
    @XmlJavaTypeAdapter( DateAdapter.class )
    public void setFoundation( LocalDate foundation ) {
        this.foundation = foundation;
    }
    ...

The resulting XML for the first program shown in this tutorial would be something like:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <Country importance="1">
        <Country_Name>Spain</Country_Name>
        <Country_Capital>Madrid</Country_Capital>
        <Country_Foundation_Date>1469-10-19</Country_Foundation_Date>
        <Country_Continent>Europe</Country_Continent>
        <Country_Population>45000000</Country_Population>
    </Country>

This can be applied to any other complex type not directly supported by JAXB that we would like to have in our XML structures. We just need to extend javax.xml.bind.annotation.adapters.XmlAdapter and implement its methods unmarshal(ValueType) and marshal(BoundType).

5. XSDs

An XSD is an XML schema. It contains information about XML files and structures, with rules and constraints that should be followed. These rules can apply to the structure of the XML and also to its content. Rules can be combined, and very complex rules can be created using all kinds of structures. In this article we are going to show the main concepts of how to use XSDs for validation and marshaling purposes.
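The validation machinery itself lives in the JDK (javax.xml.validation) and can be tried independently of JAXB. The sketch below uses a minimal, hypothetical one-element schema held in a String rather than the tutorial's countries_validation.xsd file, and checks one valid and one invalid document against it:

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;

public class XsdCheck {
    // hypothetical minimal schema: a Country element with one mandatory child
    static final String XSD =
        "<?xml version='1.0'?>"
      + "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
      + "  <xs:element name='Country'>"
      + "    <xs:complexType><xs:sequence>"
      + "      <xs:element name='Country_Name' type='xs:string' minOccurs='1'/>"
      + "    </xs:sequence></xs:complexType>"
      + "  </xs:element>"
      + "</xs:schema>";

    public static void main(String[] args) throws Exception {
        SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = sf.newSchema(new StreamSource(new StringReader(XSD)));
        Validator validator = schema.newValidator();

        // a document that satisfies the schema
        validator.validate(new StreamSource(
            new StringReader("<Country><Country_Name>Spain</Country_Name></Country>")));
        System.out.println("valid document accepted");

        // a document missing the mandatory Country_Name child:
        // with no custom ErrorHandler set, validate() throws on error
        try {
            validator.validate(new StreamSource(new StringReader("<Country/>")));
        } catch (SAXException expected) {
            System.out.println("invalid document rejected");
        }
    }
}
```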
Here is an example of an XML Schema that can be used for the class Country used in this tutorial:

    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:element name="Country">
            <xs:complexType>
                <xs:sequence>
                    <xs:element name="Country_Name" type="xs:string" />
                    <xs:element name="Country_Capital" type="xs:string" />
                    <xs:element name="Country_Foundation_Date" type="xs:string" />
                    <xs:element name="Country_Continent" type="xs:string" />
                    <xs:element name="Country_Population" type="xs:integer" />
                </xs:sequence>
            </xs:complexType>
        </xs:element>
    </xs:schema>

5.1. Validation using XSDs

XSDs can be used for XML validation. JAXB uses XSDs to validate XML and to ensure that objects and XML structures follow the set of expected rules. In order to validate an object against an XSD we need to create the XSD first, for example:

    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:simpleType name="continentType">
            <xs:restriction base="xs:string">
                <xs:pattern value="Asia|Europe|America|Africa|Oceania"/>
            </xs:restriction>
        </xs:simpleType>
        <xs:element name="Country">
            <xs:complexType>
                <xs:sequence>
                    <xs:element name="Country_Name" type="xs:string" minOccurs='1' />
                    <xs:element name="Country_Capital" type="xs:string" minOccurs='1' />
                    <xs:element name="Country_Foundation_Date" type="xs:string" minOccurs='1' />
                    <xs:element name="Country_Continent" type="continentType" minOccurs='1' />
                    <xs:element name="Country_Population" type="xs:integer" />
                </xs:sequence>
            </xs:complexType>
        </xs:element>
    </xs:schema>

We can use this in our program by indicating to JAXB which XSD we want to use in our code.
We can do this by creating an instance of the class javax.xml.validation.Schema and initializing the validation in the following way:

    /**
     * schema is created
     */
    SchemaFactory sf = SchemaFactory.newInstance( XMLConstants.W3C_XML_SCHEMA_NS_URI );
    Schema schema = sf.newSchema( new File( "countries_validation.xsd" ) );

The validation expects an error handler that can take care of errors. This is done by implementing the interface org.xml.sax.ErrorHandler and its error methods:

    public class MyErrorHandler implements ErrorHandler {

        @Override
        public void warning( SAXParseException exception ) throws SAXException {
            throw exception;
        }
        ...

After that, the validation can be used to validate instances of the class javax.xml.bind.util.JAXBSource:

    /**
     * context is created and used to create sources for each country
     */
    JAXBContext jaxbContext = JAXBContext.newInstance( Country.class );
    JAXBSource sourceSpain = new JAXBSource( jaxbContext, spain );
    JAXBSource sourceAustralia = new JAXBSource( jaxbContext, australia );

Here is the complete program:

    /**
     * error will be thrown because continent is mandatory
     */
    Country spain = new Country();
    spain.setName( "Spain" );
    spain.setCapital( "Madrid" );
    spain.setFoundation( LocalDate.of( 1469, 10, 19 ) );
    spain.setImportance( 1 );

    /**
     * ok
     */
    Country australia = new Country();
    australia.setName( "Australia" );
    australia.setCapital( "Canberra" );
    australia.setFoundation( LocalDate.of( 1788, 1, 26 ) );
    australia.setContinent( "Oceania" );
    australia.setImportance( 1 );

    /**
     * schema is created
     */
    SchemaFactory sf = SchemaFactory.newInstance( XMLConstants.W3C_XML_SCHEMA_NS_URI );
    Schema schema = sf.newSchema( new File( "countries_validation.xsd" ) );

    /**
     * context is created and used to create sources for each country
     */
    JAXBContext jaxbContext = JAXBContext.newInstance( Country.class );
    JAXBSource sourceSpain = new JAXBSource( jaxbContext, spain );
    JAXBSource sourceAustralia = new JAXBSource( jaxbContext, australia );

    /**
     * validator is initialized
     */
Validator validator = schema.newValidator(); validator.setErrorHandler( new MyErrorHandler() );

//validator is used try { validator.validate( sourceSpain ); System.out.println( "spain has no problems" ); } catch( SAXException ex ) { ex.printStackTrace(); System.out.println( "spain has problems" ); } try { validator.validate( sourceAustralia ); System.out.println( "australia has no problems" ); } catch( SAXException ex ) { ex.printStackTrace(); System.out.println( "australia has problems" ); }

and its output:

org.xml.sax.SAXParseException at javax.xml.bind.util.JAXBSource$1.parse(JAXBSource.java:248) at javax.xml.bind.util.JAXBSource$1.parse(JAXBSource.java:232) at com.sun.org.apache.xerces.internal.jaxp.validation.ValidatorHandlerImpl.validate( ... spain has problems australia has no problems

We can see that Australia has no problems but Spain does…

5.2. Marshaling using XSDs

XSDs are also used for binding and generating Java classes from XML files and vice versa. We are going to see here how to use them with a marshaling example. Using the same XML schema shown before, we are going to write a program that initializes a javax.xml.validation.Schema using the given XSD and a javax.xml.bind.JAXBContext for the classes that we want to marshal (Country).
This program will use a javax.xml.bind.Marshaller in order to perform the needed operations:/** * validation will fail because continent is mandatory */ Country spain = new Country(); spain.setName( "Spain" ); spain.setCapital( "Madrid" ); spain.setFoundation( LocalDate.of( 1469, 10, 19 ) );SchemaFactory sf = SchemaFactory.newInstance( XMLConstants.W3C_XML_SCHEMA_NS_URI ); Schema schema = sf.newSchema( new File( "countries_validation.xsd" ) );JAXBContext jaxbContext = JAXBContext.newInstance( Country.class );Marshaller marshaller = jaxbContext.createMarshaller(); marshaller.setProperty( Marshaller.JAXB_FORMATTED_OUTPUT, true ); marshaller.setSchema( schema ); //the schema uses a validation handler for validate the objects marshaller.setEventHandler( new MyValidationEventHandler() ); try { marshaller.marshal( spain, System.out ); } catch( JAXBException ex ) { ex.printStackTrace(); }/** * continent is wrong and validation will fail */ Country australia = new Country(); australia.setName( "Australia" ); australia.setCapital( "Camberra" ); australia.setFoundation( LocalDate.of( 1788, 01, 26 ) ); australia.setContinent( "Australia" );try { marshaller.marshal( australia, System.out ); } catch( JAXBException ex ) { ex.printStackTrace(); }/** * finally everything ok */ australia = new Country(); australia.setName( "Australia" ); australia.setCapital( "Camberra" ); australia.setFoundation( LocalDate.of( 1788, 01, 26 ) ); australia.setContinent( "Oceania" );try { marshaller.marshal( australia, System.out ); } catch( JAXBException ex ) { ex.printStackTrace(); }We were talking before about validation related to XSDs. It is also possible to validate while marshaling objects to XML. If our object does not comply with some of the expected rules, we are going to get some validation errors here as well although we are not validating explicitly. 
In this case we are not using an error handler implementing org.xml.sax.ErrorHandler but a javax.xml.bind.ValidationEventHandler:

public class MyValidationEventHandler implements ValidationEventHandler {

@Override public boolean handleEvent( ValidationEvent event ) { System.out.println( "Error caught!" ); System.out.println( "event.getSeverity(): " + event.getSeverity() ); System.out.println( "event: " + event.getMessage() ); System.out.println( "event.getLinkedException(): " + event.getLinkedException() ); //the locator contains much more information like line, column, etc. System.out.println( "event.getLocator().getObject(): " + event.getLocator().getObject() ); return false; }

}

We can see that the method javax.xml.bind.ValidationEventHandler.handleEvent(ValidationEvent) has been implemented. And the output will be:

javax.xml.bind.MarshalException - with linked exception: [org.xml.sax.SAXParseException; lineNumber: 0; columnNumber: 0; cvc-pattern-valid: Value 'Australia' is not facet-valid with respect to pattern 'Asia|Europe|America|Africa|Oceania' for type 'continentType'.] at com.sun.xml.internal.bind.v2.runtime.MarshallerImpl.write(MarshallerImpl.java:311) at ... <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <Country> <Country_Name>Australia</Country_Name> <Country_Capital>Camberra</Country_Capital> <Country_Foundation_Date>1788-01-26</Country_Foundation_Date> <Country_Continent>Oceania</Country_Continent> <Country_Population>0</Country_Population> </Country>

Obviously there is much more to say about XSDs; the space of possible rules and applications is huge, but that is out of the scope of this tutorial. If you need more information about how to use XML Schemas within JAXB you can visit the following links, which I found really interesting:

http://www.w3schools.com/schema/schema_example.asp
http://blog.bdoughan.com/2010/12/jaxb-and-marshalunmarshal-schema.html

6.
Annotations

In this tutorial we have been using several annotations within JAXB for XML marshaling and un-marshaling operations. We are going to list here the most important ones:

XmlAccessorOrder: Controls the order in which the fields and properties of a class appear in the XML. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlAccessorOrder.html
XmlAccessorType: Indicates whether fields or properties should be serialized. It is used in combination with javax.xml.bind.annotation.XmlAccessType. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlAccessorType.html
XmlAnyAttribute: Maps an element to a map of wildcard attributes. For more information: http://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlAnyAttribute.html
XmlAnyElement: Serves as a fallback for un-marshaling operations when no mapping is predefined. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlAnyElement.html
XmlAttribute: One of the most basic and most used annotations. It maps a Java element (property, attribute, field) to an XML node attribute, and it has been used in several examples in this tutorial. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlAttribute.html
XmlElement: Maps a Java element to an XML node using its name. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlElement.html
XmlElementRef: Maps a Java element to an XML node using its type (unlike XmlElement, which maps by name). For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlElementRef.html
XmlElementRefs: Marks a property that refers to classes with XmlElement or JAXBElement.
For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlElementRefs.html
XmlElements: A container for multiple XmlElement annotations. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlElements.html
XmlElementWrapper: Generates a wrapper element around an XML representation; it is intended to be used with collections (in this tutorial we saw there are different ways to handle collections). For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlElementWrapper.html
XmlEnum: Provides a mapping from an enum to an XML representation. It works in combination with XmlEnumValue. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlEnum.html
XmlEnumValue: Maps an enum constant to an XML element. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlEnumValue.html
XmlID: Maps a property to an XML id. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlID.html
XmlList: Another way of handling lists within JAXB. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlList.html
XmlMimeType: Controls the representation of the annotated property. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlMimeType.html
XmlMixed: The annotated element can contain mixed content. The content can be text, child elements or unknown content. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlMixed.html
XmlRootElement: Probably the most used annotation within JAXB. It maps a class to an XML element and is a kind of entry point for every JAXB representation. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlRootElement.html
XmlSchema: Maps a package to an XML namespace.
For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlSchema.html
XmlSchemaType: Maps a Java type to a simple schema built-in type. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlSchemaType.html
XmlSeeAlso: Instructs JAXB to also bind other classes when binding the annotated one. This is needed because Java makes it very difficult to list all subclasses of a given class; using this mechanism you can indicate to JAXB which subclasses (or other classes) should be bound when handling a specific one. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlSeeAlso.html
XmlType: Used for mapping a class or an enum to a type in an XML Schema. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlType.html
XmlValue: Enables mapping a class to an XML Schema complex type with a simpleContent, or to an XML Schema simple type. For more information: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/XmlValue.html

This is a long list, but these are not all the available JAXB related annotations. For a complete list of all the available JAXB annotations please go to the package summary: https://docs.oracle.com/javase/7/docs/api/javax/xml/bind/annotation/package-frame.html.

7. Tools

There are many tools that can help programmers while working with JAXB and XML Schemas. We are going to list some of them in this chapter and provide some useful resources:

schemagen: Stands for Schema Generator. It is used for generating XML Schemas from annotated Java classes or sources (it also works with bytecode). It can be used directly from the command line or using Ant. For more information visit the Oracle page https://docs.oracle.com/javase/7/docs/technotes/tools/share/schemagen.html or https://jaxb.java.net/2.2.4/docs/schemagen.html. It is available with the standard JDK bundle and can be launched from the bin directory.
xjc: The JAXB Binding Compiler. It is used to generate Java sources (classes) from XML Schemas (XSDs), and it can be used with Ant. It performs a validation of the XSD before execution. For more information about its usage and parameters you should visit https://jaxb.java.net/2.2.4/docs/xjc.html.
Jsonix: Nothing to do with JAXB directly, but it is a tool that converts XML to JSON and the other way around. It is very useful when working with JavaScript. Here is the official link: http://confluence.highsource.org/display/JSNX/Jsonix.
Hyperjaxb3: Provides relational persistence for JAXB objects. It uses the JAXB annotations to persist the objects in relational databases. It works with Hibernate, probably the most used Java persistence framework. You can find more documentation and sources in the link http://confluence.highsource.org/display/HJ3/Home.
Maven JAXB2 Plugin: A plugin for Maven that offers all the functionality of the xjc tool and much more while converting XML Schemas into Java classes. In the following link you can download the sources and consult the documentation: https://github.com/highsource/maven-jaxb2-plugin.
JAXB Eclipse Plugin: There are several Eclipse plugins available (as well as for other IDEs like NetBeans or IntelliJ) that allow you to compile XML Schemas into Java classes, validate XML Schemas or generate Schemas. All of them wrap and use one of the tools mentioned above. Since there is no plugin that can be described as the standard one, I leave it to you to search the web and pick the one of your choice.

These are not all the tools that work with JAXB and XSDs for converting XML structures into Java classes; there are many more. I just listed here the most important and basic ones that every developer should take into consideration when using JAXB in their applications.

8.
Best Practices

Although this is not a complete list, and best practices are a subjective topic, I would like to share a couple of preferred usages related to JAXB.

Try to avoid problematic primitive types like float, decimal or negativeInteger; these do not have a related type in JAXB. In these cases you can probably use other “non problematic” primitive types.
Use the annotation @XmlSchemaType to be more explicit regarding the type that has to be used, and do not leave it to JAXB to decide this point.
Avoid anonymous types and mixed content.
JAXB uses the system's default locale and country for generating information and error messages. In order to change that you can pass the following JVM parameters to your application: -Duser.country=US -Duser.language=en -Duser.variant=Traditional_WIN

9. Summary

So that is all. In this tutorial we explained what JAXB is and what it can be used for. JAXB stands for Java Architecture for XML Binding and it is a framework used basically for:

Marshaling Java objects into XML structures.
Un-marshaling XML files into Java objects.

These operations can be done using XSDs (XML Schemas), which are very useful for validation purposes as well. In plain words, JAXB can be used for converting XML into Java classes and vice versa. We explained the main concepts related to all of this and listed the main annotations and classes involved in JAXB usage. There are several tools that can be used in combination with JAXB for different purposes; we explained how to use some of them and what can be done with them.

10. Resources

This tutorial contains in-depth information about JAXB and its usage for marshaling and un-marshaling XML into Java objects. It also explains how to use XSDs when working with JAXB, along with some best practices and tricks.
But in case you need to go one step further in your application programming with JAXB, it may be useful to visit the following resources:

http://examples.javacodegeeks.com/core-java/xml/bind/jaxb-unmarshal-example/
http://www.oracle.com/technetwork/articles/javase/index-140168.html
http://en.wikipedia.org/wiki/Java_Architecture_for_XML_Binding
https://jaxb.java.net/
http://en.wikipedia.org/wiki/XML_schema

11. Download

This was a tutorial on Java XML binding with JAXB. You can download the full source code of this tutorial here: jaxb_ultimate. ...

jOOQ Tuesdays: Yalim Gerger brings Git to Oracle

We’re excited to launch a new series on our blog: the jOOQ Tuesdays. In this series, we’ll publish an article on the third Tuesday of every month where we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics. We have the pleasure of starting this series with Yalım Gerger who will show us why he thinks that Oracle PL/SQL developers are more than ready for Git!

Hi Yalım – you’re the founder of Gerger, the company behind Formspider. What’s Formspider?

Hi. Formspider is an application development framework for Oracle PL/SQL developers. It enables PL/SQL developers to build high quality business applications using only PL/SQL as the programming language. No Java or JavaScript skills are needed to use Formspider.

Interesting, even from a Java developer’s perspective! Essentially, you’re offering a way to completely bypass Java as middleware (and of course HTML, JavaScript, CSS).

This is not entirely true. We still use Java in our product. Formspider has a middle-tier application that our customers can deploy to any JEE compliant application server. This middle-tier application helps us bridge the Formspider JavaScript library running in the client’s browser to the PL/SQL running in the database. We also use Java libraries to generate Excel files from the data stored in an application, a common use case for business applications. So yes, the applications are not coded in Java. Our customers are PL/SQL developers. But we use Java to improve our product. Same with HTML and JavaScript. Our job is to understand these technologies and their capabilities really well and expose them as intuitive APIs to PL/SQL developers.

Do you also have customers that access their PL/SQL APIs both through Formspider as well as through their home-grown Java / .NET applications?

Yes.
We have customers who have both PL/SQL teams that work with Formspider and Java teams that work with Java technologies. This requires a great deal of collaboration between the two teams, and that’s not always possible. Usually, what happens is that Java/.NET teams try to move the application away from the database as much as possible. I was just talking to a friend who works at a large financial institution in which the OO guys are pushing hard to eliminate all the PL/SQL APIs. He was going mad. There are various reasons for this. It is partly political turf wars, partly pure ignorance about the capabilities of database software and PL/SQL.

We can feel the pain. There is no such thing as a magic bullet… So what should they do then? How should an application be architected? Do you think there is a “right” architecture?

No. I think it depends so much on the context. Are you building a consumer app or are you building a business app? Are you a company building a horizontal product or an IT department serving a business operating in a particular vertical? There are so many parameters to consider. At the risk of being too generic, I think an IT department serving a large enterprise should not try to build a database-agnostic application. That’s silly. On the contrary, it should take full advantage of the database software, and other software it uses. You shouldn’t pretend to be seven different organizations, building seven different layers of the application, that just happen to be collaborating. You are just one organization. Act like that. Cut through the fat. Integrate as deeply as you can. This is the most cost-effective way to build well-performing applications on time and on budget. Database-agnostic software is for horizontal software companies.

We’ve recently blogged about the caveats of dual licensing, where we said that shipping our sources to paying customers is essential for a company that calls itself an Open Source company, like Data Geekery does.
I’ve seen you ship your sources as well – but you’re not really doing “Open Source”. How would you describe your offering?

I loved that blog post by the way. I think the way jOOQ is licensed is brilliantly fair, i.e. it is free if the database is free and it has a price tag if your database has a license fee. In our case, the database always has a license fee. So we don’t have a free option for Formspider. For the Oracle community and for the price tags that they are used to, our license fee is so small that it is practically free. Anyone who is thrown off by our price tag is probably not serious about using Formspider anyway. Our customers who sign up for our highest level of support service may get the source code of our product for the duration of the professional service. This option is attractive for customers who invest a lot into the application they build with Formspider.

Yes, Oracle price tags have a reputation… Yalım, you seem to be an Oracle person. And as such, you are about to launch Gitora. What is it?

Gitora is a free version control tool that integrates the Oracle database with Git. This is a little embarrassing to bring up in a blog mainly read by Java developers, but very common version control tasks that most Java developers take for granted are very hard to do in PL/SQL. There is a good reason for this. PL/SQL has no concept of a working directory. PL/SQL is not a file-based language, i.e. source code units do not reside in the private file system of a developer but in the Oracle database as packages, procedures and functions globally available to any developer. That makes version control very difficult if not impossible. So what do people do? Nothing mostly. Daily backups are used as a way to get back to a previous state of the code if needed. Some teams create one working directory that is hooked to version control and store all their PL/SQL code in this directory by extracting the DDLs, usually manually. That’s as sophisticated as it gets.
Proper team development and merging in PL/SQL is very difficult and I haven’t seen it done successfully very often. And I’ve interacted with a lot of PL/SQL teams all around the world. Gitora makes this very easy. It turns the database schema into a working directory. If you execute a Git command, any change to the working directory happens automatically in your database schema.

Interesting. We’ve recently implemented a home-grown “solution” for a customer, which implements automatic version control and installation from a Microsoft Team Foundation Server repository. Maybe, we should migrate to Gitora then?

I didn’t know that. That’s so cool. If you build a version control tool which works for PL/SQL and talks to TFS instead of Git, I think that is also very valuable. Essentially we built the same product but used different version control systems. I encourage you to put it out there.

Why not. Maybe we’ll contribute! Thanks for this very interesting insight, Yalım!

If this interview has triggered your interest, follow Yalım, FormSpider, or Gitora on Twitter:

https://twitter.com/yalimgerger https://twitter.com/formspider https://twitter.com/gitoraforplsql

For more information about Gitora, watch the Gitora tutorial:

Reference: jOOQ Tuesdays: Yalim Gerger brings Git to Oracle from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Managing Package Dependencies with Degraph

A large part of the art of software development is keeping the complexity of a system as low as possible. But what is complexity anyway? While the exact semantics vary quite a bit, depending on who you ask, probably most agree that it has a lot to do with the number of parts in a system and their interactions. Consider a marble in space, i.e. a planet, moon or star. Without any interaction this is as boring as a system can get. Nothing happens. If the marble moves, it keeps moving in exactly the same way. To be honest there isn’t even a way to determine if it is moving. Boooring. Add a second marble to the system and let them attract each other, like earth and moon. Now the system is more interesting. The two objects circle each other if they aren’t too fast. Somewhat interesting. Now add a third object. In the general case things get so interesting that we can’t even predict what is going to happen. The whole system didn’t just become complex, it became chaotic. You now have a three-body problem. In the general case this problem cannot be solved, i.e. we cannot predict what will happen with the system. But there are some special cases. Especially the case where two of the objects are very close to each other, like earth and moon, and the third one is so far away that the first two objects behave just like one. In this case you can approximate the system with a two-particle system. But what has this to do with Java? This sounds more like physics. I think software development is similar in some aspects. A complete application is way too complicated to be understood as a whole. To fight this complexity we divide the system into parts (classes) that can be understood on their own, and that hide their inner complexity so that when we look at the larger picture we don’t have to worry about every single code line in a class, but only about the class as one entity. This is actually very similar to what physicists do with systems. But let’s look at the scale of things.
The basic building block of software is the code line. And to keep the complexity in check we bundle code lines that work together in methods. How many code lines go into a single method varies, but it is in the order of 10 lines of code. Next you gather methods into classes. How many methods go into a single class? Typically in the order of 10 methods! And then? We bundle 100-10000 classes in a single jar! I hope I’m not the only one who thinks something is amiss. I’m not sure what comes out of project jigsaw, but currently Java only offers packages as a way to bundle classes. Packages aren’t a powerful abstraction, yet they are the only one we have, so we had better use them. Most teams do use packages, but in an ad hoc rather than a well-structured way. The result is similar to trying to consider moon and sun as one part of the system, and the earth as the other part. The result might work, but it is probably as intuitive as Ptolemy’s planetary model. Instead, decide on criteria for how you want to differentiate your packages. I personally call them slicings, inspired by an article by Oliver Gierke. Possible slicings, in order of importance, are:

the deployable jar file the class should end up in
the use case / feature / part of the business model the class belongs to
the technical layer the class belongs to

The packages this results in will look like this: <domain>.<deployable>.<domain part>.<layer> It should be easy to decide where a class goes. And it should also keep the packages at a reasonable size, even when you don’t use the separation by technical layer. But what do you gain from this? It is easier to find classes, but that’s about it. You need one more rule to make this really worthwhile: there must be no cyclic dependencies! This means: if a class in package A references a class in package B, no class in B may reference a class in A. This also applies if the reference is indirect via multiple other packages. But that is still not enough.
Slices should be cycle free as well, so if a domain part X references a different domain part Y, the reverse dependency must not exist! This will indeed put some rather strict rules on your package and dependency structure. The benefit is that the structure becomes very flexible. Without such a structure, splitting your project into multiple parts will probably be rather difficult. Ever tried to reuse part of an application in a different one, just to realize that you basically have to include most of the application in order to get it to compile? Ever tried to deploy different parts of an application to different servers, just to realize you can’t? It certainly happened to me before I used the approach mentioned above. But with this stricter structure, the parts you may want to reuse will almost on their own end up at the end of the dependency chain, so you can take them and bundle them in their own jar, or just copy the code into a different project and have it compile in a very short time. Also, while trying to keep your packages and slices cycle free you’ll be forced to think hard about what each package involved is really about, something that improved my code base considerably in many cases. So there is one problem left: dependencies are hard to see. Without a tool, it is very difficult to keep a code base cycle free. Of course there are plenty of tools that check for cycles, but cleaning up these cycles is tough, and the way most tools present these cycles doesn’t help very much. I think what one needs are two things:

a simple test that can run with all your other tests and fails when you create a dependency circle.
a tool that visualizes all the dependencies between classes, while at the same time showing which slice each class belongs to.

Surprise! I can recommend such a great tool: Degraph!
(I’m the author, so I might be biased) You can write tests in JUnit like this:

assertThat( classpath().including("de.schauderhaft.**") .printTo("degraphTestResult.graphml") .withSlicing("module", "de.schauderhaft.(*).*.**") .withSlicing("layer", "de.schauderhaft.*.(*).**"), is(violationFree()) );

The test will analyze everything in the classpath that starts with de.schauderhaft. It will slice the classes in two ways: by taking the third part of the package name and by taking the fourth part of the package name. So a class named de.schauderhaft.customer.persistence.HibernateCustomerRepository ends up in the module customer and in the layer persistence. And it will make sure that modules, layers and packages are cycle free. If it finds a dependency circle, it will create a graphml file, which you can open using the free graph editor yEd. With a little layouting you get results like the following, where the dependencies that result in circular dependencies are marked in red. Again, for more details on how to achieve good usable layouts I have to refer to the documentation of Degraph. Also note that the graphs are colored mainly green with a little red, which nicely fits the season!

Reference: Managing Package Dependencies with Degraph from our JCG partner Jens Schauder at the Java Advent Calendar blog....
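The "no cyclic dependencies" rule that the test above enforces can be illustrated with a plain depth-first search over a package dependency graph. This is a hypothetical sketch of the underlying idea, not Degraph's actual implementation (all class and package names are made up):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class CycleCheck {

    /** Returns true if the dependency graph (package -> packages it depends on) contains a cycle. */
    public static boolean hasCycle(Map<String, Set<String>> deps) {
        Set<String> done = new HashSet<>();       // fully explored, known cycle-free
        Set<String> inProgress = new HashSet<>(); // on the current DFS path
        for (String node : deps.keySet()) {
            if (dfs(node, deps, done, inProgress)) {
                return true;
            }
        }
        return false;
    }

    private static boolean dfs(String node, Map<String, Set<String>> deps,
                               Set<String> done, Set<String> inProgress) {
        if (done.contains(node)) {
            return false;
        }
        if (!inProgress.add(node)) {
            return true; // back edge: we reached a node already on the current path
        }
        for (String dep : deps.getOrDefault(node, Collections.emptySet())) {
            if (dfs(dep, deps, done, inProgress)) {
                return true;
            }
        }
        inProgress.remove(node);
        done.add(node);
        return false;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> deps = new HashMap<>();
        deps.put("customer.web", Set.of("customer.service"));
        deps.put("customer.service", Set.of("customer.persistence"));
        System.out.println(hasCycle(deps)); // prints false

        // add a back reference from persistence to web: now there is a cycle
        deps.put("customer.persistence", Set.of("customer.web"));
        System.out.println(hasCycle(deps)); // prints true
    }
}
```

A test along these lines fails as soon as someone introduces a back reference, which is exactly the early feedback the article argues for.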

R: Time to/from the weekend

In my last post I showed some examples using R’s lubridate package, and another problem it made really easy to solve was working out how close a particular date time was to the weekend. I wanted to write a function which would return the previous Sunday or upcoming Saturday depending on which was closer. lubridate’s floor_date and ceiling_date functions make this quite simple. For example, if we want to round the 18th December down to the beginning of the week and up to the beginning of the next week we could do the following:

> library(lubridate) > floor_date(ymd("2014-12-18"), "week") [1] "2014-12-14 UTC"   > ceiling_date(ymd("2014-12-18"), "week") [1] "2014-12-21 UTC"

For the date in the future we actually want to grab the Saturday rather than the Sunday so we’ll subtract one day from that:

> ceiling_date(ymd("2014-12-18"), "week") - days(1) [1] "2014-12-20 UTC"

Now let’s put that together into a function which finds the closest weekend for a given date:

findClosestWeekendDay = function(dateToLookup) { before = floor_date(dateToLookup, "week") + hours(23) + minutes(59) + seconds(59) after = ceiling_date(dateToLookup, "week") - days(1) if((dateToLookup - before) < (after - dateToLookup)) { before } else { after } }   > findClosestWeekendDay(ymd_hms("2014-12-13 13:33:29")) [1] "2014-12-13 UTC"   > findClosestWeekendDay(ymd_hms("2014-12-14 18:33:29")) [1] "2014-12-14 23:59:59 UTC"   > findClosestWeekendDay(ymd_hms("2014-12-15 18:33:29")) [1] "2014-12-14 23:59:59 UTC"   > findClosestWeekendDay(ymd_hms("2014-12-17 11:33:29")) [1] "2014-12-14 23:59:59 UTC"   > findClosestWeekendDay(ymd_hms("2014-12-17 13:33:29")) [1] "2014-12-20 UTC"   > findClosestWeekendDay(ymd_hms("2014-12-19 13:33:29")) [1] "2014-12-20 UTC"

I’ve set the Sunday date at 23:59:59 so that I can use this date in the next step where we want to calculate how many hours it is from the current date to the nearest weekend.
I ended up with this function:

distanceFromWeekend = function(dateToLookup) { before = floor_date(dateToLookup, "week") + hours(23) + minutes(59) + seconds(59) after = ceiling_date(dateToLookup, "week") - days(1) timeToBefore = dateToLookup - before timeToAfter = after - dateToLookup   if(timeToBefore < 0 || timeToAfter < 0) { 0 } else { if(timeToBefore < timeToAfter) { timeToBefore / dhours(1) } else { timeToAfter / dhours(1) } } }   > distanceFromWeekend(ymd_hms("2014-12-13 13:33:29")) [1] 0   > distanceFromWeekend(ymd_hms("2014-12-14 18:33:29")) [1] 0   > distanceFromWeekend(ymd_hms("2014-12-15 18:33:29")) [1] 18.55833   > distanceFromWeekend(ymd_hms("2014-12-17 11:33:29")) [1] 59.55833   > distanceFromWeekend(ymd_hms("2014-12-17 13:33:29")) [1] 58.44194   > distanceFromWeekend(ymd_hms("2014-12-19 13:33:29")) [1] 10.44194

While this works, it’s quite slow when you run it over a data frame which contains a lot of rows. There must be a clever R way of doing the same thing (perhaps using matrices) which I haven’t figured out yet, so if you know how to speed it up do let me know.

Reference: R: Time to/from the weekend from our JCG partner Mark Needham at the Mark Needham Blog blog....
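As an aside for JVM readers of this blog, the floor/ceiling-of-week arithmetic above maps fairly directly onto java.time. The following is a rough Java sketch of the same nearest-weekend logic (the class and method names are illustrative; it mirrors the R functions above, not lubridate's API):

```java
import java.time.DayOfWeek;
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.temporal.TemporalAdjusters;

public class WeekendDistance {

    /** Hours from t to the nearest weekend; 0 if t already falls on a Saturday or Sunday. */
    public static double hoursFromWeekend(LocalDateTime t) {
        // end of the previous (or current) Sunday, mirroring the R code's 23:59:59 trick
        LocalDateTime lastSundayEnd = t.toLocalDate()
                .with(TemporalAdjusters.previousOrSame(DayOfWeek.SUNDAY))
                .atTime(23, 59, 59);
        // start of the next (or current) Saturday
        LocalDateTime nextSaturdayStart = t.toLocalDate()
                .with(TemporalAdjusters.nextOrSame(DayOfWeek.SATURDAY))
                .atStartOfDay();

        Duration sinceWeekend = Duration.between(lastSundayEnd, t);
        Duration untilWeekend = Duration.between(t, nextSaturdayStart);
        if (sinceWeekend.isNegative() || untilWeekend.isNegative()) {
            return 0.0; // t is inside a weekend
        }
        long seconds = Math.min(sinceWeekend.getSeconds(), untilWeekend.getSeconds());
        return seconds / 3600.0;
    }

    public static void main(String[] args) {
        // Saturday afternoon: already the weekend
        System.out.println(hoursFromWeekend(LocalDateTime.of(2014, 12, 13, 13, 33, 29))); // prints 0.0
        // Wednesday afternoon: closest weekend is the upcoming Saturday
        System.out.println(hoursFromWeekend(LocalDateTime.of(2014, 12, 17, 13, 33, 29)));
    }
}
```

On the Wednesday example this yields roughly 58.44 hours, matching the R output above.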

R: Cleaning up and plotting Google Trends data

I recently came across an excellent article written by Stian Haklev in which he describes things he wishes he’d been told before starting out with R, one of them being to do all data clean-up in code, which I thought I’d give a try:

My goal is to leave the raw data completely unchanged, and do all the transformation in code, which can be rerun at any time. While I’m writing the scripts, I’m often jumping around, selectively executing individual lines or code blocks, running commands to inspect the data in the REPL (read-evaluate-print-loop, where each command is executed as soon as you type enter, in the picture above it’s the pane to the right), etc. But I try to make sure that when I finish up, the script is runnable by itself.

I thought the Google Trends data set would be an interesting one to play around with as it gives you a CSV containing several different bits of data of which I’m only interested in ‘interest over time’. It’s not very easy to automate the download of the CSV file so I did that bit manually and automated everything from there onwards.
The first step was to read the CSV file and explore some of the rows to see what it contained: > library(dplyr)   > googleTrends = read.csv("/Users/markneedham/Downloads/report.csv", row.names=NULL)   > googleTrends %>% head() ## row.names Web.Search.interest..neo4j ## 1 Worldwide; 2004 - present ## 2 Interest over time ## 3 Week neo4j ## 4 2004-01-04 - 2004-01-10 0 ## 5 2004-01-11 - 2004-01-17 0 ## 6 2004-01-18 - 2004-01-24 0   > googleTrends %>% sample_n(10) ## row.names Web.Search.interest..neo4j ## 109 2006-01-08 - 2006-01-14 0 ## 113 2006-02-05 - 2006-02-11 0 ## 267 2009-01-18 - 2009-01-24 0 ## 199 2007-09-30 - 2007-10-06 0 ## 522 2013-12-08 - 2013-12-14 88 ## 265 2009-01-04 - 2009-01-10 0 ## 285 2009-05-24 - 2009-05-30 0 ## 318 2010-01-10 - 2010-01-16 0 ## 495 2013-06-02 - 2013-06-08 79 ## 28 2004-06-20 - 2004-06-26 0   > googleTrends %>% tail() ## row.names Web.Search.interest..neo4j ## 658 neo4j example Breakout ## 659 neo4j graph database Breakout ## 660 neo4j java Breakout ## 661 neo4j node Breakout ## 662 neo4j rest Breakout ## 663 neo4j tutorial Breakout We only want to keep the rows which contain (week, interest) pairs so the first thing we’ll do is rename the columns: names(googleTrends) = c("week", "score") Now we want to strip out the rows which don’t contain (week, interest) pairs. The easiest way to do this is to look for rows which don’t contain date values in the ‘week’ column. First we need to split the start and end dates in that column by using the strsplit function. 
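As an aside for JVM readers, the row-filtering idea — split each ‘week’ value on " - " and keep only rows where both halves parse as dates — can be sketched in plain Java. This is a hypothetical illustration of the same cleaning step, not part of the author's R pipeline (class and method names are mine):

```java
import java.time.LocalDate;
import java.time.format.DateTimeParseException;
import java.util.Optional;

class WeekRange {
    // Parses "2004-01-04 - 2004-01-10" into a [start, end] pair; returns empty
    // for header/footer rows such as "Interest over time" or
    // "Worldwide; 2004 - present", which either don't split in two or don't
    // parse as ISO dates.
    static Optional<LocalDate[]> parseWeek(String week) {
        String[] parts = week.split(" - ");
        if (parts.length != 2) {
            return Optional.empty();
        }
        try {
            return Optional.of(new LocalDate[]{
                LocalDate.parse(parts[0]),
                LocalDate.parse(parts[1])
            });
        } catch (DateTimeParseException e) {
            return Optional.empty();
        }
    }
}
```

The filter-on-parse-failure trick is the same one the dplyr pipeline achieves with strptime returning NA for non-date rows.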
I found it much easier to apply the function to each row individually rather than passing in a list of values so I created a dummy column with a row number in to allow me to do that (a trick Antonios showed me): > googleTrends %>% mutate(ind = row_number()) %>% group_by(ind) %>% mutate(dates = strsplit(week, " - "), start = dates[[1]][1] %>% strptime("%Y-%m-%d") %>% as.character(), end = dates[[1]][2] %>% strptime("%Y-%m-%d") %>% as.character()) %>% head() ## Source: local data frame [6 x 6] ## Groups: ind ## ## week score ind dates start end ## 1 Worldwide; 2004 - present 1 1 <chr[2]> NA NA ## 2 Interest over time 1 2 <chr[1]> NA NA ## 3 Week 90 3 <chr[1]> NA NA ## 4 2004-01-04 - 2004-01-10 3 4 <chr[2]> 2004-01-04 2004-01-10 ## 5 2004-01-11 - 2004-01-17 3 5 <chr[2]> 2004-01-11 2004-01-17 ## 6 2004-01-18 - 2004-01-24 3 6 <chr[2]> 2004-01-18 2004-01-24 Now we need to get rid of the rows which have an NA value for ‘start’ or ‘end': > googleTrends %>% mutate(ind = row_number()) %>% group_by(ind) %>% mutate(dates = strsplit(week, " - "), start = dates[[1]][1] %>% strptime("%Y-%m-%d") %>% as.character(), end = dates[[1]][2] %>% strptime("%Y-%m-%d") %>% as.character()) %>% filter(!is.na(start) | !is.na(end)) %>% head() ## Source: local data frame [6 x 6] ## Groups: ind ## ## week score ind dates start end ## 1 2004-01-04 - 2004-01-10 3 4 <chr[2]> 2004-01-04 2004-01-10 ## 2 2004-01-11 - 2004-01-17 3 5 <chr[2]> 2004-01-11 2004-01-17 ## 3 2004-01-18 - 2004-01-24 3 6 <chr[2]> 2004-01-18 2004-01-24 ## 4 2004-01-25 - 2004-01-31 3 7 <chr[2]> 2004-01-25 2004-01-31 ## 5 2004-02-01 - 2004-02-07 3 8 <chr[2]> 2004-02-01 2004-02-07 ## 6 2004-02-08 - 2004-02-14 3 9 <chr[2]> 2004-02-08 2004-02-14 Next we’ll get rid of ‘week’, ‘ind’ and ‘dates’ as we aren’t going to need those anymore: > cleanGoogleTrends = googleTrends %>% mutate(ind = row_number()) %>% group_by(ind) %>% mutate(dates = strsplit(week, " - "), start = dates[[1]][1] %>% strptime("%Y-%m-%d") %>% as.character(), end = 
dates[[1]][2] %>% strptime("%Y-%m-%d") %>% as.character()) %>% filter(!is.na(start) | !is.na(end)) %>% ungroup() %>% select(-c(ind, dates, week))   > cleanGoogleTrends %>% head() ## Source: local data frame [6 x 3] ## ## score start end ## 1 3 2004-01-04 2004-01-10 ## 2 3 2004-01-11 2004-01-17 ## 3 3 2004-01-18 2004-01-24 ## 4 3 2004-01-25 2004-01-31 ## 5 3 2004-02-01 2004-02-07 ## 6 3 2004-02-08 2004-02-14   > cleanGoogleTrends %>% sample_n(10) ## Source: local data frame [10 x 3] ## ## score start end ## 1 8 2010-09-26 2010-10-02 ## 2 73 2013-11-17 2013-11-23 ## 3 52 2012-07-01 2012-07-07 ## 4 3 2005-06-19 2005-06-25 ## 5 3 2004-12-12 2004-12-18 ## 6 3 2009-09-06 2009-09-12 ## 7 71 2014-09-14 2014-09-20 ## 8 3 2004-12-26 2005-01-01 ## 9 62 2013-03-03 2013-03-09 ## 10 3 2006-03-19 2006-03-25   > cleanGoogleTrends %>% tail() ## Source: local data frame [6 x 3] ## ## score start end ## 1 80 2014-10-19 2014-10-25 ## 2 80 2014-10-26 2014-11-01 ## 3 84 2014-11-02 2014-11-08 ## 4 81 2014-11-09 2014-11-15 ## 5 83 2014-11-16 2014-11-22 ## 6 2 2014-11-23 2014-11-29 Ok now we’re ready to plot. This was my first attempt: > library(ggplot2) > ggplot(aes(x = start, y = score), data = cleanGoogleTrends) + geom_line(size = 0.5) ## geom_path: Each group consist of only one observation. Do you need to adjust the group aesthetic?As you can see, not too successful! The first mistake I’ve made is not telling ggplot that the ‘start’ column is a date and so it can use that ordering when plotting: > cleanGoogleTrends = cleanGoogleTrends %>% mutate(start = as.Date(start)) > ggplot(aes(x = start, y = score), data = cleanGoogleTrends) + geom_line(size = 0.5)My next mistake is that ‘score’ is not being treated as a continuous variable and so we’re ending up with this very strange looking chart. 
We can see that if we call the class function: > class(cleanGoogleTrends$score) ## [1] "factor" Let’s fix that and plot again: > cleanGoogleTrends = cleanGoogleTrends %>% mutate(score = as.numeric(score)) > ggplot(aes(x = start, y = score), data = cleanGoogleTrends) + geom_line(size = 0.5)That’s much better but there is quite a bit of noise in the week to week scores which we can flatten a bit by plotting a rolling mean of the last 4 weeks instead: > library(zoo) > cleanGoogleTrends = cleanGoogleTrends %>% mutate(rolling = rollmean(score, 4, fill = NA, align=c("right")), start = as.Date(start))   > ggplot(aes(x = start, y = rolling), data = cleanGoogleTrends) + geom_line(size = 0.5)Here’s the full code if you want to reproduce: library(dplyr) library(zoo) library(ggplot2)   googleTrends = read.csv("/Users/markneedham/Downloads/report.csv", row.names=NULL) names(googleTrends) = c("week", "score")   cleanGoogleTrends = googleTrends %>% mutate(ind = row_number()) %>% group_by(ind) %>% mutate(dates = strsplit(week, " - "), start = dates[[1]][1] %>% strptime("%Y-%m-%d") %>% as.character(), end = dates[[1]][2] %>% strptime("%Y-%m-%d") %>% as.character()) %>% filter(!is.na(start) | !is.na(end)) %>% ungroup() %>% select(-c(ind, dates, week)) %>% mutate(start = as.Date(start), score = as.numeric(score), rolling = rollmean(score, 4, fill = NA, align=c("right")))   ggplot(aes(x = start, y = rolling), data = cleanGoogleTrends) + geom_line(size = 0.5) My next step is to plot the Google Trends scores against my meetup data set to see if there’s any interesting correlations going on. As an aside I made use of knitr while putting together this post – it works really well for checking that you’ve included all the steps and that it actually works!Reference: R: Cleaning up and plotting Google Trends data from our JCG partner Mark Needham at the Mark Needham Blog blog....
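One more JVM aside: the 4-week smoothing used above is simple to implement by hand. Here is a hypothetical Java sketch of a right-aligned rolling mean with NA-style (NaN) fill, mirroring zoo's rollmean(score, 4, fill = NA, align = "right") — not code from the original post:

```java
import java.util.Arrays;

class RollingMean {
    // Right-aligned rolling mean over the last `window` values.
    // Positions with an incomplete window are left as NaN, playing the role
    // of zoo's fill = NA.
    static double[] rollMean(double[] xs, int window) {
        double[] out = new double[xs.length];
        Arrays.fill(out, Double.NaN);
        for (int i = window - 1; i < xs.length; i++) {
            double sum = 0;
            for (int j = i - window + 1; j <= i; j++) {
                sum += xs[j];
            }
            out[i] = sum / window;
        }
        return out;
    }

    public static void main(String[] args) {
        // prints [NaN, NaN, NaN, 2.5, 3.5]
        System.out.println(Arrays.toString(rollMean(new double[]{1, 2, 3, 4, 5}, 4)));
    }
}
```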

EAGER fetching is a code smell

Introduction

Hibernate fetching strategies can really make a difference between an application that barely crawls and a highly responsive one. In this post I’ll explain why you should prefer query based fetching instead of global fetch plans.

Fetching 101

Hibernate defines four association retrieval strategies:

Join: the association is OUTER JOINed in the original SELECT statement.
Select: an additional SELECT statement is used to retrieve the associated entity (or entities).
Subselect: an additional SELECT statement is used to retrieve the whole associated collection. This mode is meant for to-many associations.
Batch: an additional number of SELECT statements is used to retrieve the whole associated collection. Each additional SELECT will retrieve a fixed number of associated entities. This mode is meant for to-many associations.

These fetching strategies might be applied in the following scenarios:

the association is always initialized along with its owner (e.g. EAGER FetchType)
the uninitialized association (e.g. LAZY FetchType) is navigated, therefore the association must be retrieved with a secondary SELECT

The Hibernate mappings’ fetching information forms the global fetch plan. At query time, we may override the global fetch plan, but only for LAZY associations. For this we can use the fetch HQL/JPQL/Criteria directive. EAGER associations cannot be overridden, therefore tying your application to the global fetch plan. Hibernate 3 acknowledged that LAZY should be the default association fetching strategy:

By default, Hibernate3 uses lazy select fetching for collections and lazy proxy fetching for single-valued associations. These defaults make sense for most associations in the majority of applications.

This decision was taken after noticing many performance issues associated with Hibernate 2’s default eager fetching.
Unfortunately JPA has taken a different approach and decided that to-many associations be LAZY while to-one relationships be fetched eagerly:

@OneToMany: LAZY
@ManyToMany: LAZY
@ManyToOne: EAGER
@OneToOne: EAGER

EAGER fetching inconsistencies

While it may be convenient to just mark associations as EAGER, delegating the fetching responsibility to Hibernate, it’s advisable to resort to query based fetch plans. An EAGER association will always be fetched and the fetching strategy is not consistent across all querying techniques. Next, I’m going to demonstrate how EAGER fetching behaves for all Hibernate querying variants. I will reuse the same entity model I’ve previously introduced in my fetching strategies article.

The Product entity has the following associations:

@ManyToOne(fetch = FetchType.EAGER)
@JoinColumn(name = "company_id", nullable = false)
private Company company;

@OneToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL, mappedBy = "product", optional = false)
private WarehouseProductInfo warehouseProductInfo;

@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "importer_id")
private Importer importer;

@OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL, mappedBy = "product", orphanRemoval = true)
@OrderBy("index")
private Set<Image> images = new LinkedHashSet<Image>();

The company association is marked as EAGER and Hibernate will always employ a fetching strategy to initialize it along with its owner entity.
Persistence Context loading

First we’ll load the entity using the Persistence Context API:

Product product = entityManager.find(Product.class, productId);

Which generates the following SQL SELECT statement:

Query:{[ select product0_.id as id1_18_1_, product0_.code as code2_18_1_, product0_.company_id as company_6_18_1_, product0_.importer_id as importer7_18_1_, product0_.name as name3_18_1_, product0_.quantity as quantity4_18_1_, product0_.version as version5_18_1_, company1_.id as id1_6_0_, company1_.name as name2_6_0_ from Product product0_ inner join Company company1_ on product0_.company_id=company1_.id where product0_.id=?][1]}

The EAGER company association was retrieved using an inner join. For M such associations the owner entity table is going to be joined M times. Each extra join adds to the overall query complexity and execution time. If we don’t even use all these associations, for every possible business scenario, then we’ve just paid the extra performance penalty for nothing in return.
Fetching using JPQL and Criteria

Product product = entityManager.createQuery(
    "select p " +
    "from Product p " +
    "where p.id = :productId", Product.class)
.setParameter("productId", productId)
.getSingleResult();

or with

CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<Product> cq = cb.createQuery(Product.class);
Root<Product> productRoot = cq.from(Product.class);
cq.where(cb.equal(productRoot.get("id"), productId));
Product product = entityManager.createQuery(cq).getSingleResult();

Generates the following SQL SELECT statements:

Query:{[ select product0_.id as id1_18_, product0_.code as code2_18_, product0_.company_id as company_6_18_, product0_.importer_id as importer7_18_, product0_.name as name3_18_, product0_.quantity as quantity4_18_, product0_.version as version5_18_ from Product product0_ where product0_.id=?][1]}

Query:{[ select company0_.id as id1_6_0_, company0_.name as name2_6_0_ from Company company0_ where company0_.id=?][1]}

Both JPQL and Criteria queries default to select fetching, therefore issuing a secondary select for each individual EAGER association. The larger the number of associations, the more additional individual SELECTs, and the more it will affect our application performance.

Hibernate Criteria API

While JPA 2.0 added support for Criteria queries, Hibernate has long been offering a specific dynamic query implementation. While the EntityManager implementation delegates method calls to the legacy Session API, the JPA Criteria implementation was written from scratch. That’s the reason why the Hibernate and JPA Criteria APIs behave differently for similar querying scenarios.
The previous example’s Hibernate Criteria equivalent looks like this:

Product product = (Product) session.createCriteria(Product.class)
    .add(Restrictions.eq("id", productId))
    .uniqueResult();

And the associated SQL SELECT is:

Query:{[ select this_.id as id1_3_1_, this_.code as code2_3_1_, this_.company_id as company_6_3_1_, this_.importer_id as importer7_3_1_, this_.name as name3_3_1_, this_.quantity as quantity4_3_1_, this_.version as version5_3_1_, hibernatea2_.id as id1_0_0_, hibernatea2_.name as name2_0_0_ from Product this_ inner join Company hibernatea2_ on this_.company_id=hibernatea2_.id where this_.id=?][1]}

This query uses the join fetch strategy as opposed to the select fetching employed by JPQL/HQL and the Criteria API.

Hibernate Criteria and to-many EAGER collections

Let’s see what happens when the image collection fetching strategy is set to EAGER:

@OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL, mappedBy = "product", orphanRemoval = true)
@OrderBy("index")
private Set<Image> images = new LinkedHashSet<Image>();

The following SQL is going to be generated:

Query:{[ select this_.id as id1_3_2_, this_.code as code2_3_2_, this_.company_id as company_6_3_2_, this_.importer_id as importer7_3_2_, this_.name as name3_3_2_, this_.quantity as quantity4_3_2_, this_.version as version5_3_2_, hibernatea2_.id as id1_0_0_, hibernatea2_.name as name2_0_0_, images3_.product_id as product_4_3_4_, images3_.id as id1_1_4_, images3_.id as id1_1_1_, images3_.index as index2_1_1_, images3_.name as name3_1_1_, images3_.product_id as product_4_1_1_ from Product this_ inner join Company hibernatea2_ on this_.company_id=hibernatea2_.id left outer join Image images3_ on this_.id=images3_.product_id where this_.id=? order by images3_.index][1]}

Hibernate Criteria doesn’t automatically group the parent entities list.
Because of the one-to-many children table JOIN, for each child entity we are going to get a new parent entity object reference (all pointing to the same object in our current Persistence Context):

product.setName("TV");
product.setCompany(company);

Image frontImage = new Image();
frontImage.setName("front image");
frontImage.setIndex(0);

Image sideImage = new Image();
sideImage.setName("side image");
sideImage.setIndex(1);

product.addImage(frontImage);
product.addImage(sideImage);

List products = session.createCriteria(Product.class)
    .add(Restrictions.eq("id", productId))
    .list();
assertEquals(2, products.size());
assertSame(products.get(0), products.get(1));

Because we have two image entities, we will get two Product entity references, both pointing to the same first level cache entry. To fix it we need to instruct Hibernate Criteria to use distinct root entities:

List products = session.createCriteria(Product.class)
    .add(Restrictions.eq("id", productId))
    .setResultTransformer(CriteriaSpecification.DISTINCT_ROOT_ENTITY)
    .list();
assertEquals(1, products.size());

Conclusion

The EAGER fetching strategy is a code smell. Most often it’s used for simplicity’s sake, without considering the long-term performance penalties. The fetching strategy should never be the entity mapping’s responsibility. Each business use case has different entity load requirements and therefore the fetching strategy should be delegated to each individual query. The global fetch plan should only define LAZY associations, which are fetched on a per query basis. Combined with the always check generated queries strategy, query based fetch plans can improve application performance and reduce maintenance costs.

Code available for Hibernate and JPA.

Reference: EAGER fetching is a code smell from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....
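To make the conclusion concrete, the recommended approach — a LAZY mapping combined with a query-time fetch join — can be sketched like this, reusing the article's Product/Company model (an illustrative fragment, not a compilable unit):

```java
// Mapping: default the association to LAZY, so no global EAGER plan
// ties every query to an extra join.
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "company_id", nullable = false)
private Company company;

// Query: fetch the association only in the use cases that need it.
Product product = entityManager.createQuery(
        "select p " +
        "from Product p " +
        "join fetch p.company " +
        "where p.id = :productId", Product.class)
    .setParameter("productId", productId)
    .getSingleResult();
```

Use cases that never touch the company simply drop the join fetch clause and pay for a single-table SELECT.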

Spring MVC 4 Quickstart Maven Archetype Improved

Spring Boot makes getting started with Spring extremely easy. But there are still people interested in not using Spring Boot and bootstrapping the application in a more classical way. Several years ago, I created an archetype (long before Spring Boot) that simplifies bootstrapping Spring web applications. Although Spring Boot has already been on the market for some time, Spring MVC 4 Quickstart Maven Archetype is still a quite popular project on GitHub. With some recent additions I hope it is even better.

Java 8

I have decided to switch the target platform to Java 8. There is no specific Java 8 code in the generated project yet, but I believe all new Spring projects should be started with Java 8. The adoption of Java 8 is ahead of forecasts. Have a look at: https://typesafe.com/company/news/survey-of-more-than-3000-developers-reveals-java-8-adoption-ahead-of-previous-forecasts

Introducing Spring IO Platform

Spring IO Platform brings together the core Spring APIs into a cohesive platform for modern applications. The main advantage is that it simplifies dependency management by providing versions of Spring projects along with their dependencies that are tested and known to work together. Previously, all the dependencies were specified manually and solving version conflicts took some time.
With Spring IO Platform we must change only the platform version (and take care of dependencies outside the platform, of course):

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.spring.platform</groupId>
            <artifactId>platform-bom</artifactId>
            <version>${io.spring.platform-version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

The dependencies can now be used without specifying the version in the POM:

<!-- Spring -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
</dependency>
<!-- Security -->
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-config</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-web</artifactId>
</dependency>

Java Security configuration

When I first created the archetype there was no possibility to configure Spring Security using Java code. But now there is, so I migrated the XML configuration to Java configuration. The SecurityConfig now extends WebSecurityConfigurerAdapter and is marked with the @Configuration and @EnableWebMvcSecurity annotations.
Security configuration details

Restrict access to every URL apart from the public ones ("/", "/resources/**" and "/signup"). The XML configuration:

<security:intercept-url pattern="/" access="permitAll" />
<security:intercept-url pattern="/resources/**" access="permitAll" />
<security:intercept-url pattern="/signup" access="permitAll" />
<security:intercept-url pattern="/**" access="isAuthenticated()" />

became:

http
    .authorizeRequests()
    .antMatchers("/", "/resources/**", "/signup").permitAll()
    .anyRequest().authenticated()

Login / Logout

The XML configuration:

<security:form-login login-page="/signin" authentication-failure-url="/signin?error=1"/>
<security:logout logout-url="/logout" />

became:

http
    .formLogin()
        .loginPage("/signin")
        .permitAll()
        .failureUrl("/signin?error=1")
        .loginProcessingUrl("/authenticate")
    .and()
    .logout()
        .logoutUrl("/logout")
        .permitAll()
        .logoutSuccessUrl("/signin?logout");

Remember me

The XML configuration:

<security:remember-me services-ref="rememberMeServices" key="remember-me-key"/>

became:

http
    .rememberMe()
    .rememberMeServices(rememberMeServices())
    .key("remember-me-key");

CSRF enabled for production and disabled for test

Currently CSRF is enabled by default, so no additional configuration is needed. But in the case of integration tests I wanted to be sure that CSRF is disabled. I could not find a good way of doing this. I started with a CSRF protection matcher passed to CsrfConfigurer, but I ended up with lots of code I did not like to have in SecurityConfiguration. I ended up with a NoCsrfSecurityConfig that extends the original SecurityConfig and disables CSRF:

@Configuration
public class NoCsrfSecurityConfig extends SecurityConfig {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        super.configure(http);
        http.csrf().disable();
    }
}

Connection pooling

HikariCP is now used as the default connection pool in the generated application.
The default configuration is used:

@Bean
public DataSource configureDataSource() {
    HikariConfig config = new HikariConfig();
    config.setDriverClassName(driver);
    config.setJdbcUrl(url);
    config.setUsername(username);
    config.setPassword(password);
    config.addDataSourceProperty("cachePrepStmts", "true");
    config.addDataSourceProperty("prepStmtCacheSize", "250");
    config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
    config.addDataSourceProperty("useServerPrepStmts", "true");

    return new HikariDataSource(config);
}

More to come

Spring MVC 4 Quickstart Maven Archetype is far from finished. As the Spring platform evolves, the archetype must adjust accordingly. I am looking forward to hearing what could be improved to make it a better project. If you have an idea or suggestion, drop a comment or create an issue on GitHub.

References: Spring MVC 4 Quickstart Maven Archetype

Reference: Spring MVC 4 Quickstart Maven Archetype Improved from our JCG partner Rafal Borowiec at the Codeleak.pl blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.