Test Attribute #8 – Truthiness

I want to thank Stephen Colbert for coining a word I can use in my title. Without him, all this would still be possible, had I not given up looking for a better word after a few minutes.

Tests are about trust. We expect them to be reliable. Reliable tests tell us everything is ok when they pass, and that something is wrong when they fail. The problem is that life is not black and white, and tests are not just green and red. Tests can give false positive results (fail when they shouldn't) or false negative results (pass when they shouldn't). We've encountered the false positive ones before – these are the fragile, dependent tests. The ones that pass, instead of failing, are the problematic ones. They hide the real picture from us, and erode our trust, not just in those tests, but also in others. After all, once we find one problematic test, who can say the others we wrote are not problematic as well? This is where truthiness (how much we feel we can rely on our tests) comes into play.

Dependency Injection Example

Or rather, injecting an example of a dependency. Let's say we have a service (or 3rd party library) our tested code uses. It's slow and communication is unreliable. All the things that give services a bad name. Our natural tendency is to mock the service in the test. By mocking the service, we can test our code in isolation. So, in our case, our tested Hotel class uses a Service:

public class Hotel {
    public string GetServiceName(Service service) {
        var result = service.GetName();
        return "Name: " + result;
    }
}

To know if the method works correctly, we'll write this test:

[TestMethod]
public void GetServiceName_RoomService_NameIsRoom() {
    var fakeService = A.Fake<Service>();
    A.CallTo(() => fakeService.GetName()).Returns("Room");
    var hotel = new Hotel();
    Assert.AreEqual("Name: Room", hotel.GetServiceName(fakeService));
}

And everything is groovy. Until, in production, the service gets disconnected and throws an exception. And our test says "B-b-b-but, I'm still passing!".

The Truthiness Is Out There

Mocking is an example of obstructing the real behavior with prescriptive tests, but it's just an example. It can happen when we test a few cases, but don't cover others. Here's one of my favorite examples. What's the hidden test case here?

public int Increment() {
    return counter++;
}

Tests are code examples. They work to the extent of our imagination of "what can go wrong?" Like overflow, in the last case. Much like differentiation, truthiness cannot be examined by itself. The example works, but it hides a case we need another test for. We need to look at the collection of test cases, and see if we covered everything. The solution doesn't have to be a test of the same type. We can have a unit test for the service happy path, and an end-to-end test to cover the disconnection case. Of course, if you can think of other cases in the first place, why not unit test them? So to level up your truthiness:

Ideate. Before writing the tests, and if you're doing TDD – the code, write a list of test cases. On a notebook, a whiteboard, or my favorite: empty tests.
Reflect. Often, when we write a few tests, new test cases come to mind. Having a visual image of the code can help think of other cases.
Beware the mock. We use mocks to prescribe dependency behavior in specific cases. Every mock you make can be a potential failure point, so think about other cases to mock.
Review. Do it in pairs. Four eyes are better than two.

Aim for higher truthiness.
Higher trust in your tests will help you sleep better.

Reference: Test Attribute #8 – Truthiness from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.
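To make the hidden overflow case concrete, here is a minimal JUnit sketch. It is in Java rather than the post's C#, and the Counter class is an illustrative stand-in for the code under test:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CounterOverflowTest {

    @Test
    public void increment_PastMaxValue_WrapsToMinValue() {
        // Illustrative Counter type; assume it uses plain int arithmetic.
        Counter counter = new Counter(Integer.MAX_VALUE);
        counter.increment();
        // int arithmetic wraps silently; if wrap-around is a bug in your domain,
        // assert the expected failure mode (e.g. an exception) here instead.
        assertEquals(Integer.MIN_VALUE, counter.current());
    }
}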

Introducing Hadoop Development Tools

A few days back, Apache Hadoop Development Tools, a.k.a. HDT, was released. The project aims at bringing plugins to Eclipse to simplify development on the Hadoop platform. This blog aims to provide an overview of a few great features of HDT.

Single Endpoint

The project can act as a single endpoint for your HDFS, ZooKeeper and MR cluster. You can connect to your HDFS/ZooKeeper instance and browse or add more data. You can submit jobs to the MR cluster and see the status of all the running jobs.

Map Reduce Project/Templates

There is support for creating a Hadoop project. Just point it to the location of Hadoop, and it will pull down all the required libs and generate an Eclipse project. That's not all: you can also generate a Mapper/Reducer/Partitioner/Driver based on the org.apache.hadoop.mapreduce API (a sketch of such a Mapper follows at the end of this post).

Multiple Version Support

Currently the project supports two versions of the Hadoop platform, viz. 1.1 and 2.2. The project is based on the Eclipse plugin architecture and could possibly support other versions like 0.23, CDH4 etc. in future releases.

Eclipse Support

The project works with Eclipse 3.6 and above. It has been tested on Indigo and Juno, and can work on Kepler as well.

The project aims to simplify the Hadoop platform for developers. It is still young and will require support from the community to flourish. To learn more or to get involved with the project, check the project page or the mailing lists.

Reference: Introducing Hadoop Development Tools from our JCG partner Rahul Sharma at The road so far… blog.
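As referenced above, here is a minimal sketch of the kind of Mapper skeleton HDT can generate from the org.apache.hadoop.mapreduce API. The word-count logic and class name are illustrative, not what the wizard emits verbatim:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Emit (token, 1) for every whitespace-separated token in the input line.
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}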

Getting A List of Available Cryptographic Algorithms

How do you learn what cryptographic algorithms are available to you? The Java spec names several required ciphers, digests, etc., but a provider often offers more than that. Fortunately it is easy to learn what's available on our system.

import java.security.Provider;
import java.security.Security;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ListAlgorithms {
    public static void main(String[] args) {
        // Security.addProvider(new
        //     org.bouncycastle.jce.provider.BouncyCastleProvider());

        // get a list of services and their respective providers.
        final Map<String, List<Provider>> services = new TreeMap<>();

        for (Provider provider : Security.getProviders()) {
            for (Provider.Service service : provider.getServices()) {
                if (services.containsKey(service.getType())) {
                    final List<Provider> providers = services.get(service.getType());
                    if (!providers.contains(provider)) {
                        providers.add(provider);
                    }
                } else {
                    final List<Provider> providers = new ArrayList<>();
                    providers.add(provider);
                    services.put(service.getType(), providers);
                }
            }
        }

        // now get a list of algorithms and their respective providers
        for (String type : services.keySet()) {
            final Map<String, List<Provider>> algs = new TreeMap<>();
            for (Provider provider : Security.getProviders()) {
                for (Provider.Service service : provider.getServices()) {
                    if (service.getType().equals(type)) {
                        final String algorithm = service.getAlgorithm();
                        if (algs.containsKey(algorithm)) {
                            final List<Provider> providers = algs.get(algorithm);
                            if (!providers.contains(provider)) {
                                providers.add(provider);
                            }
                        } else {
                            final List<Provider> providers = new ArrayList<>();
                            providers.add(provider);
                            algs.put(algorithm, providers);
                        }
                    }
                }
            }

            // write the results to standard out.
            System.out.printf("%20s : %s\n", "", type);
            for (String algorithm : algs.keySet()) {
                System.out.printf("%-20s : %s\n", algorithm,
                        Arrays.toString(algs.get(algorithm).toArray()));
            }
            System.out.println();
        }
    }
}

The system administrator can override the standard crypto libraries. In practice it's safest to always load your own crypto library and either register it manually, as above, or better yet pass it as an optional parameter when creating new objects.

Algorithms

There are a few dozen standard algorithms. The ones we're most likely to be interested in are:

Symmetric Cipher
KeyGenerator – creates a symmetric key
SecretKeyFactory – converts between symmetric keys and raw bytes
Cipher – encryption cipher
AlgorithmParameters – algorithm parameters
AlgorithmParameterGenerator – algorithm parameters

Asymmetric Cipher
KeyPairGenerator – creates public/private keys
KeyFactory – converts between keypairs and raw bytes
Cipher – encryption cipher
Signature – digital signatures
AlgorithmParameters – algorithm parameters
AlgorithmParameterGenerator – algorithm parameters

Digests
MessageDigest – digest (MD5, SHA1, etc.)
Mac – HMAC. Like a message digest but requires an encryption key as well so it can't be forged by an attacker

Certificates and KeyStores
KeyStore – JKS, PKCS, etc.
CertStore – like a keystore but only stores certs
CertificateFactory – converts between digital certificates and raw bytes

It is critical to remember that most algorithms are provided for backward compatibility and should not be used in greenfield development. As I write this, the generally accepted advice is:

Use a variant of AES. Only use AES-ECB if you know with absolute certainty that you will never encrypt more than one blocksize (16 bytes) of data.
Always use a good random IV even if you're using AES-CBC. Do not use the same IV or an easily predicted one.
Do not use less than 2048 bits in an asymmetric key.
Use SHA-256 or better. MD5 is considered broken, and SHA-1 will be considered broken in the near future.
Use PBKDF2WithHmacSHA1 to create AES keys from passwords/passphrases. (See also Creating Password-Based Encryption Keys.)

Some people might want to use one of the other AES-candidate ciphers (e.g., Twofish). These ciphers are probably safe, but you might run into problems if you're sharing files with other parties since they're not in the required cipher suite.

Beware US Export Restrictions

Finally, it's important to remember that the standard Java distribution is crippled due to US export restrictions. You can get full functionality by installing a standard US-only file on your system, but it's hard if not impossible for developers to verify this has been done. In practice many if not most people use a third-party cryptographic library like BouncyCastle. Many inexperienced developers forget about this and unintentionally use crippled functionality.

Reference: Getting A List of Available Cryptographic Algorithms from our JCG partner Bear Giles at the Invariant Properties blog.
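To make the last two pieces of advice concrete (password-based AES keys and a random IV with AES-CBC), here is a minimal sketch using the standard JCA APIs; the iteration count, key size and helper names are illustrative choices, not recommendations from the original post:

import java.security.SecureRandom;

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class PasswordAesSketch {

    // Derive an AES key from a passphrase with PBKDF2WithHmacSHA1.
    static SecretKey deriveAesKey(char[] password, byte[] salt) throws Exception {
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 128);
        return new SecretKeySpec(factory.generateSecret(spec).getEncoded(), "AES");
    }

    // Encrypt with AES-CBC using a fresh, random 16-byte IV per message.
    static byte[] encrypt(SecretKey key, byte[] plaintext, byte[] iv) throws Exception {
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        // Store or transmit the IV alongside the ciphertext; it need not be secret.
        return cipher.doFinal(plaintext);
    }
}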

A new proximity query for Lucene, using automatons

The simplest Apache Lucene query, TermQuery, matches any document that contains the specified term, regardless of where the term occurs inside each document. Using BooleanQuery you can combine multiple TermQuerys, with full control over which terms are optional (SHOULD) and which are required (MUST) or required not to be present (MUST_NOT), but still the matching ignores the relative positions of each term inside the document. Sometimes you do care about the positions of the terms, and for such cases Lucene has various so-called proximity queries.     The simplest proximity query is PhraseQuery, to match a specific sequence of tokens such as “Barack Obama”. Seen as a graph, a PhraseQuery is a simple linear chain:By default the phrase must precisely match, but if you set a non-zero slop factor, a document can still match even when the tokens are not exactly in sequence, as long as the edit distance is within the specified slop. For example, “Barack Obama” with a slop factor of 1 will also match a document containing “Barack Hussein Obama” or “Barack H. Obama”. It looks like this graph:Now there are multiple paths through the graph, including an any (*) transition to match an arbitrary token. (Note: while the graph cannot properly express it, this query would also match a document that had the tokens Barack and Obama on top of one another, at the same position, which is a little bit strange!) In general, proximity queries are more costly on both CPU and IO resources, since they must load, decode and visit another dimension (positions) for each potential document hit. That said, for exact (no slop) matches, using common-grams, shingles and ngrams to index additional “proximity terms” in the index can provide enormous performance improvements in some cases, at the expense of an increase in index size. MultiPhraseQuery is another proximity query. It generalizes PhraseQuery by allowing more than one token at each position, for example:This matches any document containing either domain name system or domain name service. MultiPhraseQuery also accepts a slop factor to allow for non-precise matches. Finally, span queries (e.g.SpanNearQuery, SpanFirstQuery) go even further, allowing you to build up a complex compound query based on positions where each clause matched. What makes them unique is that you can arbitrarily nest them. For example, you could first build a SpanNearQuery matching Barack Obama with slop=1, then another one matching George Bush, and then make another SpanNearQuery, containing both of those as sub-clauses, matching if they appear within 10 terms of one another. Introducing TermAutomatonQuery As of Lucene 4.10 there will be a new proximity query to further generalize on MultiPhraseQuery and the span queries: it allows you to directly build an arbitrary automaton expressing how the terms must occur in sequence, including any transitions to handle slop. Here’s an example:This is a very expert query, allowing you fine control over exactly what sequence of tokens constitutes a match. You build the automaton state-by-state and transition-by-transition, including explicitly adding any transitions (sorry, no QueryParser support yet, patches welcome!). Once that’s done, the query determinizes the automaton and then uses the same infrastructure (e.g.CompiledAutomaton) that queries like FuzzyQuery use for fast term matching, but applied to term positions instead of term bytes. The query is naively scored like a phrase query, which may not be ideal in some cases. 
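For a feel of what building such an automaton looks like, here is a hedged sketch of the slop-1 "Barack Obama" example using the new query; the method names (createState, addTransition, addAnyTransition, setAccept, finish) reflect my reading of the 4.10 sources, and the field name is illustrative:

TermAutomatonQuery q = new TermAutomatonQuery("body");
int start = q.createState();
int afterBarack = q.createState();
int skippedOne = q.createState();
int accept = q.createState();
q.setAccept(accept, true);

q.addTransition(start, afterBarack, "barack");
q.addTransition(afterBarack, accept, "obama");   // exact "barack obama"
q.addAnyTransition(afterBarack, skippedOne);     // or skip one arbitrary token
q.addTransition(skippedOne, accept, "obama");    // "barack * obama"

q.finish();   // determinizes the automaton so the query can be executed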
In addition to this new query there is also a simple utility class, TokenStreamToTermAutomatonQuery, that provides loss-less translation of any graph TokenStream into the equivalent TermAutomatonQuery. This is powerful because it means even arbitrary token stream graphs will be correctly represented at search time, preserving the PositionLengthAttribute that some tokenizers now set. While this means you can finally correctly apply arbitrary token stream graph synonyms at query-time, because the index still does not store PositionLengthAttribute, index-time synonyms are still not fully correct. That said, it would be simple to build a TokenFilter that writes the position length into a payload, and then to extend the new TermAutomatonQuery to read from the payload and apply that length during matching (patches welcome!). The query is likely quite slow, because it assumes every term is optional; in many cases it would be easy to determine required terms (e.g. Obama in the above example) and optimize such cases. In the case where the query was derived from a token stream, so that it has no cycles and does not use any transitions, it may be faster to enumerate all phrases accepted by the automaton (Lucene already has the getFiniteStrings API to do this for any automaton) and construct a boolean query from those phrase queries. This would match the same set of documents, also correctly preserving PositionLengthAttribute, but would assign different scores. The code is very new and there are surely some exciting bugs! But it should be a nice start for any application that needs precise control over where terms occur inside documents.Reference: A new proximity query for Lucene, using automatons from our JCG partner Michael Mc Candless at the Changing Bits blog....

Spring Batch as Wildfly Module

For a long time, the Java EE specification was lacking a Batch Processing API. Today, this is an essential necessity for enterprise applications. This was finally fixed with JSR-352, Batch Applications for the Java Platform, now available in Java EE 7. JSR-352 got its inspiration from the Spring Batch counterpart. Both cover the same concepts, although the resulting APIs are a bit different. Since the Spring team also collaborated on JSR-352, it was only a matter of time for them to provide an implementation based on Spring Batch. The latest major version of Spring Batch (version 3) now supports JSR-352.

I've been a Spring Batch user for many years and I've always enjoyed that the technology had an interesting set of built-in readers and writers. These allowed you to perform the most common operations required by batch processing. Do you need to read data from a database? You could use JdbcCursorItemReader. How about writing data in a fixed format? Use FlatFileItemWriter, and so on. Unfortunately, JSR-352 implementations do not have the number of readers and writers available in Spring Batch. We have to remember that JSR-352 is very recent and hasn't had time to catch up. jBeret, the Wildfly implementation of JSR-352, already provides a few custom readers and writers.

What's the point? I was hoping that with the latest release, all the readers and writers from the original Spring Batch would be available as well. This is not the case yet, since it would take a lot of work, but there are plans to make them available in future versions. This would allow us to migrate native Spring Batch applications to JSR-352. We still have the issue of implementation vendor lock-in, but it may be interesting in some cases.

Motivation

I'm one of the main test contributors for the Java EE Samples in the JSR-352 specification. I wanted to find out if the tests I've implemented have the same behaviour using the Spring Batch implementation. How can we do that?

Code

I think this exercise is not only interesting because of the original motivation, but it's also useful to learn about modules and class loading on Wildfly. First we need to decide how we are going to deploy the needed Spring Batch dependencies. We could deploy them directly with the application, or use a Wildfly module. Modules have the advantage of being bundled directly into the application server and can be reused by all deployed applications.

Adding a Wildfly Module with Maven

With a bit of work it's possible to add the module automatically with the Wildfly Maven Plugin and the CLI (command line). Let's start by creating two files that represent the CLI commands that we need to create and remove the module:

wildfly-add-spring-batch.cli

# Connect to Wildfly instance
connect

# Create Spring Batch Module
# If the module already exists, Wildfly will output a message saying that the module already exists and the script exits.
module add \
  --name=org.springframework.batch \
  --dependencies=javax.api,javaee.api \
  --resources=${wildfly.module.classpath}

The module --name is important. We're going to need it to reference it in our application. The --resources is a pain, since you need to indicate a full classpath to all the required module dependencies, but we're generating the paths in the next few steps.

wildfly-remove-spring-batch.cli

# Connect to Wildfly instance
connect

# Remove Spring Batch Module
module remove --name=org.springframework.batch

Note wildfly.module.classpath.
This property will hold the complete classpath for the required Spring Batch dependencies. We can generate it with Maven Dependency plugin: pom-maven-dependency-plugin.xml<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-dependency-plugin</artifactId> <version>${version.plugin.dependency}</version> <executions> <execution> <phase>generate-sources</phase> <goals> <goal>build-classpath</goal> </goals> <configuration> <outputProperty>wildfly.module.classpath</outputProperty> <pathSeparator>:</pathSeparator> <excludeGroupIds>javax</excludeGroupIds> <excludeScope>test</excludeScope> <includeScope>provided</includeScope> </configuration> </execution> </executions> </plugin> This is going to pick all dependencies (including transitive), exclude javax (since they are already present in Wildfly) and exclude test scope dependencies. We need the following dependencies for Spring Batch: pom-dependencies.xml <!-- Needed for Wildfly module --> <dependency> <groupId>org.springframework.batch</groupId> <artifactId>spring-batch-core</artifactId> <version>3.0.0.RELEASE</version> <scope>provided</scope> </dependency><dependency> <groupId>org.springframework</groupId> <artifactId>spring-jdbc</artifactId> <version>4.0.5.RELEASE</version> <scope>provided</scope> </dependency><dependency> <groupId>commons-dbcp</groupId> <artifactId>commons-dbcp</artifactId> <version>1.4</version> <scope>provided</scope> </dependency><dependency> <groupId>org.hsqldb</groupId> <artifactId>hsqldb</artifactId> <version>2.3.2</version> <scope>provided</scope> </dependency> Now, we need to replace the property in the file. Let’s use Maven Resources plugin: pom-maven-resources-plugin.xml <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-resources-plugin</artifactId> <version>${version.plugin.resources}</version> <executions> <execution> <id>copy-resources</id> <phase>process-resources</phase> <goals> <goal>copy-resources</goal> </goals> <configuration> <outputDirectory>${basedir}/target/scripts</outputDirectory> <resources> <resource> <directory>src/main/resources/scripts</directory> <filtering>true</filtering> </resource> </resources> </configuration> </execution> </executions> </plugin> This will filter the configured files and replace the property wildfly.module.classpath with the value we generated previously. This is a classpath pointing to the dependencies in your local Maven repository. Now with Wildfly Maven Plugin we can execute this script (you need to have Wildfly running): pom-maven-wildfly-plugin.xml <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>${version.plugin.wildfly}</version> <configuration> <skip>false</skip> <executeCommands> <batch>false</batch> <scripts> <!--suppress MavenModelInspection --> <script>target/scripts/${cli.file}</script> </scripts> </executeCommands> </configuration> </plugin> And these profiles: pom-profiles.xml <profiles> <profile> <id>install-spring-batch</id> <properties> <cli.file>wildfly-add-spring-batch.cli</cli.file> </properties> </profile><profile> <id>remove-spring-batch</id> <properties> <cli.file>wildfly-remove-spring-batch.cli</cli.file> </properties> </profile> </profiles> (For the full pom.xml contents, check here) We can add the module by executing: mvn process-resources wildfly:execute-commands -P install-spring-batch. Or remove the module by executing: mvn wildfly:execute-commands -P remove-spring-batch. This strategy works for any module that you want to create into Wildfly. 
Think about adding a JDBC driver. You usually use a module to add it to the server, but all the documentation I've found about this describes a manual process. This approach works great for CI builds, so you can have everything you need to set up your environment.

Use Spring Batch

Ok, I have my module there, but how can I instruct Wildfly to use it instead of jBeret? We need to add the following file to the META-INF folder of our application:

jboss-deployment-structure.xml

<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure>
    <deployment>
        <exclusions>
            <module name="org.wildfly.jberet"/>
            <module name="org.jberet.jberet-core"/>
        </exclusions>

        <dependencies>
            <module name="org.springframework.batch" services="import" meta-inf="import"/>
        </dependencies>
    </deployment>
</jboss-deployment-structure>

Since JSR-352 uses a Service Loader to load the implementation, the only possible outcome is to load the service specified in the org.springframework.batch module. Your batch code will now run with the Spring Batch implementation (a short sketch of starting a job through the standard API follows at the end of this post).

Testing

The GitHub repository code has Arquillian sample tests that demonstrate the behaviour. Check the Resources section below.

Resources

You can clone a full working copy from my GitHub repository: Wildfly – Spring Batch. You can find instructions there to deploy it. Since I may modify the code in the future, you can download the original source of this post from release 1.0. Alternatively, clone the repo and check out the tag from release 1.0 with the following command: git checkout 1.0.

Future

I still need to apply this to the Java EE Samples. It's on my TODO list.

Reference: Spring Batch as Wildfly Module from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog.
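As referenced above, here is a minimal sketch of kicking off a job through the standard JSR-352 API. With the module and jboss-deployment-structure.xml in place, the Service Loader resolves the Spring Batch implementation behind this call; the job name is illustrative:

import java.util.Properties;

import javax.batch.operations.JobOperator;
import javax.batch.runtime.BatchRuntime;

public class JobStarter {

    public long startMyJob() {
        // Looks up whichever JSR-352 implementation the deployment resolves
        // (Spring Batch here, jBeret by default on Wildfly).
        JobOperator operator = BatchRuntime.getJobOperator();
        // "my-job" refers to META-INF/batch-jobs/my-job.xml in the deployment.
        return operator.start("my-job", new Properties());
    }
}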

Writing Tests for Data Access Code – Don’t Forget the Database

When we write tests for our data access code, we must follow these three rules:

Our tests must use the real database schema.
Our tests must be deterministic.
Our tests must assert the right thing.

These rules are obvious. That is why it is surprising that some developers break them (I have broken them too in the past). This blog post describes why these rules are important and helps us to follow them.

Rule 1: We Must Use the Real Database Schema

The second part of this series taught us that we should configure our integration tests by using the same configuration which is used by our application. It also taught us that it is ok to break this rule if we have a good reason for doing so. Let's investigate one quite common situation where our integration tests use a different configuration than our application. We can create our database by following this approach:

We create the database of our application by using Liquibase. We use its Spring integration to make the required changes to the database when the application is started.
We let Hibernate create the database used in our integration tests.

I have done this too, and it felt like a perfect solution because

I could enjoy the benefits of a versioned database.
Writing integration tests felt like a walk in the park because I could trust that Hibernate creates a working database for my integration tests.

However, after I started writing this tutorial (Writing Tests for Data Access Code), I realized that this approach has (at least) three problems:

If the database is created by Hibernate, we cannot test that our migration scripts create a working database.
The database created by Hibernate isn't necessarily equal to the database created by our migration scripts. For example, if the database has tables which aren't described as entities, Hibernate doesn't (naturally) create these tables.
If we want to run performance tests in the integration test suite, we have to configure the required indexes by using the @Index annotation. If we don't do this, Hibernate doesn't create these indexes. This means that we cannot trust the results of our performance tests.

Should we care about these problems? Definitely. We must remember that every test-specific change creates a difference between our test configuration and our production configuration. If this difference is too big, our tests are worthless. If we don't run our integration tests against the same database schema that is used when the application is deployed to the development / testing / production environment, we face the following problems:

We cannot necessarily write integration tests for certain features because our database is missing the required tables, triggers, constraints, or indexes. This means that we must test these features manually before the application is deployed to the production environment. This is a waste of time.
The feedback loop is a lot longer than it could be because we notice some problems (such as problems caused by faulty migration scripts) only after the application is deployed to the target environment.
If we notice a problem when an application is deployed to a production environment, the shit hits the fan and we are covered with it. I don't like to be covered with poop. Do you?

If we want to avoid these problems and maximize the benefits of our data access tests, our integration tests must use the same database schema that is used when our application is deployed to the production environment.
Rule 2: Our Tests Must Be Deterministic

Martin Fowler specifies a non-deterministic test as follows:

A test is non-deterministic when it passes sometimes and fails sometimes, without any noticeable change in the code, tests, or environment. Such tests fail, then you re-run them and they pass. Test failures for such tests are seemingly random.

He also explains why non-deterministic tests are a problem:

The trouble with non-deterministic tests is that when they go red, you have no idea whether it's due to a bug, or just part of the non-deterministic behavior. Usually with these tests a non-deterministic failure is relatively common, so you end up shrugging your shoulders when these tests go red. Once you start ignoring a regression test failure, then that test is useless and you might as well throw it away.

It should be clear to us that non-deterministic tests are harmful, and we should avoid them at all costs. So, what is the most common cause of non-deterministic data access tests? My experience has taught me that the most common reason behind non-deterministic data access tests is the failure to initialize the database into a known state before each test case is run. This is sad because it is a really easy problem to solve. In fact, we can solve it by using one of these options:

We can add information to the database by using the other methods of the tested repository.
We can write a library that initializes our database before each test is run.
We can use existing libraries such as DbUnit and NoSQLUnit.

However, we must be careful because only one of these options makes sense. The first option is the worst way to solve this problem. It clutters our test methods with unnecessary initialization code and makes them very fragile. For example, if we break the method which is used to save information to our database, every test which uses it will fail. The second option is a bit better. However, why would we want to create a new library when we could use an existing library that is proven to work? We should not reinvent the wheel. We should solve this problem by using the easiest and best way: we must use an existing library.

Rule 3: We Must Assert the Right Thing

When we write tests for our data access code, we might have to write tests that

read information from the database.
write information to the database.

What kind of assertions do we have to write? First, if we write tests that read information from the database, we have to follow these rules:

If we are using a framework or a library (e.g. Spring Data) that maps the information found from the database to objects, it makes no sense to assert that every property value of the returned object is correct. In this situation we should ensure that the value of the property which identifies the returned object is correct. The reason for this is that we should only use frameworks or libraries which we trust. If we trust that our data access framework or library does its job, it makes no sense to assert everything.
If we have implemented a repository that maps the information found from the database to objects ourselves, we should ensure that every property value of the returned object is correct. If we don't do this, we cannot be sure that our repository works correctly.

Second, if we write tests which write information to the database, we should not add any assertions to our test method. We must use a tool like DbUnit or NoSQLUnit to ensure that the correct information is stored to the database.
This approach has two benefits:

We can write our assertions on the right level. In other words, we can verify that the information is really saved to the used database.
We can avoid cluttering our test methods with code that finds the saved information from the used database and verifies that the correct information is found.

But what if we want to ensure that the method that saves information to the database returns the correct information? Well, if we have implemented this method ourselves, we have to write two tests for it:

We must ensure that the correct information is stored to the database.
We must verify that the method returns the correct information.

On the other hand, if this method is provided to us by a framework or library, we should not write any tests for it. We must remember that our goal is not to write assertions that ensure the used data access framework or library is working correctly. Our goal is to write assertions which ensure that our code is working correctly.

Summary

This blog post has taught us four things:

If we want to maximize the benefits of our data access tests, our integration tests must use the same database schema that is used when our application is deployed to the production environment.
Getting rid of non-deterministic tests is easy. All we have to do is initialize our database into a known state before each test case is run by using a library such as DbUnit or NoSQLUnit.
If we need to verify that the correct information is saved to the used database, we must use a library such as DbUnit or NoSQLUnit.
If we want to verify that the correct information is returned from the used database, we must write assertions that ensure that our code works.

Reference: Writing Tests for Data Access Code – Don't Forget the Database from our JCG partner Petri Kainulainen at the Petri Kainulainen blog.
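To illustrate rules 2 and 3 together, here is a hedged sketch using the Spring Test DbUnit library with JUnit; the repository, entity, configuration class and dataset file names are illustrative, and the exact listener setup may differ in your project:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;

import com.github.springtestdbunit.DbUnitTestExecutionListener;
import com.github.springtestdbunit.annotation.DatabaseSetup;
import com.github.springtestdbunit.annotation.ExpectedDatabase;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = PersistenceContext.class)
@TestExecutionListeners({ DependencyInjectionTestExecutionListener.class,
        DbUnitTestExecutionListener.class })
public class TodoRepositoryTest {

    @Autowired
    private TodoRepository repository;

    @Test
    @DatabaseSetup("todo-entries.xml")              // known state before the test (rule 2)
    @ExpectedDatabase("expected-todo-entries.xml")  // assert against the database itself (rule 3)
    public void save_ShouldInsertNewTodoEntry() {
        repository.save(new Todo("title", "description"));
        // no in-method assertions: the expected dataset does the verification
    }
}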

Getting started with SwitchYard 2.0.0.Alpha1 on WildFly 8.1.0.Final

I’ve been sticking my head into some hot RedHat technologies lately and among the many interesting parts I found SwitchYard. Without being disrespectful towards everybody wrapping their heads around SOA and service oriented architectures in the past, this has always been kind of weird to me as a Java EE developer. In the past I’ve been building component oriented applications with what I had at hand. Mostly driven by the features available in the Java EE standard to be “portable” and easy to use. Looking back this has been a perfect fit for many customers and applications. With an increasing demand for highly integrated applications which use already available services and processes from all over the place (departmental, central or even cloud services) this approach starts to feel more and more outdated. And this feel does not come from a technology perspective but from all the requirements around it. Having this in mind this post is the starting point of a series of how-to’s and short tutorials which aim to showcase some more diverse ways of building (Java EE) applications that fit better into today’s requirements and landscapes. What is SwitchYard? It is a component-based development framework for integration applications using the design principles and best practices of Service Oriented Architecture. If you’re expecting kind of a full-blown fancy BPMN/SOA buzz-word suite you’re off by a bit. This is for developers and should make it comparably straight forward to use. It’s been around for a while now and starting with latest 2.0.0.Alpha1 it is compatible with WildFly 8. Reasons enough for me to get you excited about it. Installing SwitchYard into latest WildFly 8.1.0.Final Download both, the switchyard-2.0.0.Alpha1-wildfly bundle and WildFly 8.1.0.Final from the project websites. Install WildFly 8 by unzipping it into a folder of your choice (e.g. D:\wildfly-8.1.0.Final\). Now unzip the SwitchYard bundle into the WildFly folder. Depending on the zip utility in use, you may be prompted whether existing files should be replaced.  Answer yes/all for all files being unzipped. It’s an alpha so you have to tweak the configuration a bit because of SWITCHYARD-2158. Open “JBOSS_HOME/standalone/configuration/standalone.xml” and search for “org.switchyard.component.camel.atom.deploy.CamelRSSComponent” and change the package from “atom” to “rss”. Now go ahead and start the server with “JBOSS_HOME/bin/standalone.sh/.bat”. If everything worked correctly you should see a message like this: 09:18:25,857 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.1.0.Final "Kenny" started in 3712ms - Started 210 of 259 services (81 services are lazy, passive or on-demand) Building and Deploying the Bean Service Quickstart If you want to get your hands dirty you can easily start with the packaged applications in the “JBOSS_HOME/quickstarts/” directory of the distribution. A simple one is the bean-service example. It makes use of one of the core components of SwitchYard, the Bean Component. It allows Java classes (or beans) to provide and consume services. And therefore you can implement a service by simply annotating a Java class or consume one by injecting a reference directly into your Java class. And because the Bean Component is a standard CDI Extension, there is no need to learn a new programming model to use it. It’s just a standard CDI Bean with a few more annotations. 
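To give a feel for what that looks like, here is a hedged sketch of a bean-component service; the @Service and @Reference annotations come from the SwitchYard bean component as I understand it, while the interfaces and types (OrderService, InventoryService, Order, OrderAck) are purely illustrative:

import javax.inject.Inject;

import org.switchyard.component.bean.Reference;
import org.switchyard.component.bean.Service;

// A plain CDI bean; the annotation publishes it as a SwitchYard service.
@Service(OrderService.class)
public class OrderServiceBean implements OrderService {

    // Consume another SwitchYard service by injecting a reference to it.
    @Inject
    @Reference
    private InventoryService inventory;

    @Override
    public OrderAck submitOrder(Order order) {
        boolean accepted = inventory.isInStock(order.getItemId(), order.getQuantity());
        return new OrderAck(order.getOrderId(), accepted);
    }
}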
For existing Java EE applications this means that you can expose existing CDI-based beans in your application as services to the outside world, or consume services within your beans, by just adding some more annotations.

First things first. We need to tweak the project pom.xml a bit to make this work. Go to the build section and replace the "jboss-as-maven-plugin" with the latest version of the wildfly-maven-plugin:

<groupId>org.wildfly.plugins</groupId>
<artifactId>wildfly-maven-plugin</artifactId>
<version>1.0.2.Final</version>

Now run "mvn package" to download all dependencies and execute the tests. It should just work fine and state:

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

Let's deploy it to our WildFly instance by issuing "mvn -Pdeploy install". The WildFly console finally lets you know about the successful execution:

10:19:44,636 INFO [org.jboss.as.server] (management-handler-thread - 1) JBAS018559: Deployed "switchyard-bean-service.jar" (runtime-name : "switchyard-bean-service.jar")

Quick Test For The Application

A very quick test is to execute mvn exec:java, which will execute the BeanClient class and fire a SOAP request towards the deployed service. The output should be:

SOAP Reply: <soap:envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"><env:header xmlns:env="http://www.w3.org/2003/05/soap-envelope"></env:header><soap:body><orders:submitorderresponse xmlns:orders="urn:switchyard-quickstart:bean-service:1.0"><orderack><orderid>PO-19838-XYZ</orderid><accepted>true</accepted><status>Order Accepted [intercepted]</status></orderack></orders:submitorderresponse></soap:body></soap:envelope>

That is it for today. The next parts will examine the sample application in a bit more detail, install the tooling, and introduce you to various other components. If you can't wait, check out:

the SwitchYard Documentation, which contains a whole bunch of useful stuff.
some awesome videos to learn all about SwitchYard in our new SwitchYard Video Series.
the other Quickstart applications.

Reference: Getting started with SwitchYard 2.0.0.Alpha1 on WildFly 8.1.0.Final from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

JavaScript Promises Are Cool

“And when I promise something, I never ever break that promise. Never.” ― Rapunzel Many languages have libraries of interesting schemes called promises, deferreds, or futures. Those help to tame the wild asynchronous into something more like the mundane sequential. JavaScript promises can promote separation of concerns in lieu of tightly coupled interfaces. This article is about JavaScript promises of the Promises/A variety. [http://wiki.commonjs.org/wiki/Promises/A]   Promise use cases:Executing rules Multiple remote validations Handling timeouts Remote data requests Animation Decoupling event logic from app logic Eliminating callback triangle of doom Controlling parallel asynchronous operationsA JavaScript promise is an I.O.U. to return a value in the future. It is a data object having a well-defined behavior. A promise has one of three possible states:Pending Rejected ResolvedA rejected or resolved promise is settled. A promise state can only move from pending to settled. Thereafter its state is immutable. A promise can be held long after its associated action has settled. At leisure, we can extract the result multiple times. We carry this out by calling promise.then(). That call won’t return unless, or until, the associated action has been settled. We can fluidly chain promises. The chained “then” functions should each either return a promise, or let the original promise be the return value. Through this paradigm we can write asynchronous code more as if it were synchronous code. The power lies in composing promise tasks:Stacked tasks: multiple thens scattered in the code, but on same promise. Parallel tasks: multiple promises return a single promise. Sequential tasks: promise … then … promise Combinations of these.Why this extra layer? Why can’t we just use raw callbacks? Problems with callbacks Callbacks are fine for reacting to simple recurring events such as enabling a form value based on a click, or for storing the result of a REST call. Callbacks also entice one to code in a chain by having one callback make the next REST call, in turn supplying a callback that makes the next REST call, and so forth. This tends toward a pyramid of doom shown in Figure 1. There, code grows horizontally faster than it grows vertically. Callbacks seem simple … until we need a result, right now, to use in the next step of our code.  'use strict'; var i = 0; function log(data) {console.log('%d %s', ++i, data); };function validate() { log("Wait for it ..."); // Sequence of four Long-running async activities setTimeout(function () { log('result first'); setTimeout(function () { log('result second'); setTimeout(function () { log('result third'); setTimeout(function () { log('result fourth') }, 1000); }, 1000); }, 1000); }, 1000);}; validate(); In Figure 1, I’ve used timeouts to mock asynchronous actions. The notion of managing exceptions there that could play controllably with downstream actions is painful. When we have to compose callbacks, then code organization becomes messy. Figure 2 shows a mock validation flow that will run, when pasted into a NodeJS REPL. We will migrate it from the pyramid-of-doom pattern to a sequential promise rendition in the next section. 
'use strict'; var i = 0; function log(data) {console.log('%d %s', ++i, data); };// Asynchronous fn executes a callback result fn function async(arg, callBack) { setTimeout(function(){ log('result ' + arg); callBack(); }, 1000); };function validate() { log("Wait for it ..."); // Sequence of four Long-running async activities async('first', function () { async('second',function () { async('third', function () { async('fourth', function () {}); }); }); }); }; validate(); Execution in a NodeJS REPL yields: $ node scripts/examp2b.js 1 Wait for it ... 2 result first 3 result second 4 result third 5 result fourth $ I once had a dynamic validation situation in AngularJS where form values could be dynamically mandatory, depending on peer form values. A REST service ruled on the valid value of each mandatory item. I avoided nested callbacks by writing a dispatcher that operated on a stack of functions based on which values were required. The dispatcher would pop a function from the stack and execute it. That function’s callback would finish by calling my dispatcher to repeat until the stack emptied. Each callback recorded any validation error returned from its remote validation call. I consider my contraption to be an anti-pattern. Had I used the promise alternative offered by Angular’s $http call, my thinking for the entire validation would have resembled a linear form resembling synchronous programming. A flattened promise chain is readable. Read on … Using Promises Figure 3 shows my contrived validation recast into a promise chain. It uses the kew promise library. The Q library works equally well. To try it, first use npm to import the kew library into NodeJS, and then load the code into the NodeJS REPL. 'use strict'; var Q = require('kew'); var i = 0;function log(data) {console.log('%d %s', ++i, data); };// Asynchronous fn returns a promise function async(arg) { var deferred = Q.defer(); setTimeout(function () { deferred.resolve('result ' + arg);\ }, 1000); return deferred.promise; };// Flattened promise chain function validate() { log("Wait for it ..."); async('first').then(function(resp){ log(resp); return async('second'); }) .then(function(resp){ log(resp); return async('third') }) .then(function(resp){ log(resp); return async('fourth'); }) .then(function(resp){ log(resp); }).fail(log); }; validate(); The output is the same as that of the nested callbacks: $ node scripts/examp2-pflat.js 1 Wait for it ... 2 result first 3 result second 4 result third 5 result fourth $ The code is slightly “taller,” but I think it is easier to understand and modify. Adding rational error handling is easier. The fail call at the end of the chain catches errors within the chain, but I could have also supplied a reject handler at any then to deal with a rejection in its action. Server or Browser Promises are useful in a browser as well as in a NodeJS server. The following URL, http://jsfiddle.net/mauget/DnQDx/, points to a JSFiddle that shows how to use a single promise in a web page. All the code is modifiable in the JSFiddle. One variation of the browser output is in Figure 4. I rigged the action to randomly reject. You could try it several times to get an opposite result. It would be straightforward to extend it to a multiple promise chain, as in the previous NodeJS example.  Parallel Promises Consider an asynchronous operation feeding another asynchronous action. Let the latter consist of three parallel asynchronous actions that, in turn, feed a final action. 
It settles only when all parallel child requests settle. See Figure 5. This is inspired from a favorable encounter with a chain of twelve MongoDB operations. Some were eligible to operate parallel. I implemented the flow with promises.  How would we model those parallel promises at the center row of that diagram? The key is that most promise libraries have an all function that produces a parent promise of child promises held in an array. When all the child promises resolve, the parent promise resolves. If one of the child promises rejects, the parent promise rejects. Figure 6 shows a code fragment that makes ten literals into a promise of ten parallel promises. The then at the end completes only when all ten children resolve or if any child rejects. var promiseVals = ['To ', 'be, ', 'or ', 'not ', 'to ', 'be, ', 'that ', 'is ', 'the ', 'question.'];var startParallelActions = function (){ var promises = [];// Make an asynchronous action from each literal promiseVals.forEach(function(value){ promises.push(makeAPromise(value)); });// Consolidate all promises into a promise of promises return Q.all(promises); };startParallelActions ().then( . . . The following URL, http://jsfiddle.net/mauget/XKCy2/, targets a JSFiddle that runs 10 parallel promises in a browser, rejecting or resolving at random. The complete code is there for inspection and what-if changes. Rerun until you get an opposite completion. Figure 7 shows the positive result.  Birthing a Promise Many APIs return a promise having a then function – they’re thenable. Normally I could just chain a then to a thenable function’s result. Otherwise, the $q, mpromise, Q, and kew libraries have a simple API used to create, reject, or resolve a promise. There are links to API documentation for each library in the references section. I’ve not usually needed to construct a promise, except for wrapping promise-ignorant literals and timeout functions for this article. See the examples where I created promises. Promise Library Interoperation Most JavaScript promise libraries interoperate at the then level. You can create a promise from a foreign promise because a promise can wrap any kind of value. This works across libraries that support then. There are disparate promise functions aside from then. If you need a function that your library doesn’t include, you can wrap a promise from your library in a new promise from a library that has your desired function. For example, JQuery promises are sometimes maligned in the literature. You could wrap each right away in a Q, $q, mpromise, or kew promise to operate in that library. Finally I wrote this article as someone who one year ago hesitated to embrace promises. I was simply trying to get a job done. I didn’t want to learn a new API or to chance breaking my code due to misunderstanding promises. Was I ever wrong! When I got off the dime, I easily achieved gratifying results. In this article, I’ve given simplistic examples of a single promise, a promise chain, and a parallel promise of promises. Promises are not hard to use. If I can use them, anybody can. To flesh out the concept, I encourage you to click up the supplied references written by promise experts. Begin at the Promises/A reference, the de-facto standard for JavaScript promises. If you have not directly used promises, give them a try. Resolved: you’ll have a good experience. I promise!Reference: JavaScript Promises Are Cool from our JCG partner Lou Mauget at the Keyhole Software blog....

The 10 Most Annoying Things Coming Back to Java After Some Days of Scala

So, I’m experimenting with Scala because I want to write a parser, and the Scala Parsers API seems like a really good fit. After all, I can implement the parser in Scala and wrap it behind a Java interface, so apart from an additional runtime dependency, there shouldn’t be any interoperability issues. After a few days of getting really really used to the awesomeness of Scala syntax, here are the top 10 things I’m missing the most when going back to writing Java:         1. Multiline strings That is my personal favourite, and a really awesome feature that should be in any language. Even PHP has it: Multiline strings. As easy as writing: println ("""Dear reader,If we had this feature in Java, wouldn't that be great?Yours Sincerely, Lukas""") Where is this useful? With SQL, of course! Here’s how you can run a plain SQL statement with jOOQ and Scala: println( DSL.using(configuration) .fetch(""" SELECT a.first_name, a.last_name, b.title FROM author a JOIN book b ON a.id = b.author_id ORDER BY a.id, b.id """) ) And this isn’t only good for static strings. With string interpolation, you can easily inject variables into such strings: val predicate = if (someCondition) "AND a.id = 1" else ""println( DSL.using(configuration) // Observe this little "s" .fetch(s""" SELECT a.first_name, a.last_name, b.title FROM author a JOIN book b ON a.id = b.author_id -- This predicate is the referencing the -- above Scala local variable. Neat! WHERE 1 = 1 $predicate ORDER BY a.id, b.id """) ) That’s pretty awesome, isn’t it? For SQL, there is a lot of potential in Scala.2. Semicolons I sincerely haven’t missed them one bit. The way I structure code (and probably the way most people structure code), Scala seems not to need semicolons at all. In JavaScript, I wouldn’t say the same thing. The interpreted and non-typesafe nature of JavaScript seems to indicate that leaving away optional syntax elements is a guarantee to shoot yourself in the foot. But not with Scala. val a = thisIs.soMuchBetter() val b = no.semiColons() val c = at.theEndOfALine() This is probably due to Scala’s type safety, which would make the compiler complain in one of those rare ambiguous situations, but that’s just an educated guess. 3. Parentheses This is a minefield and leaving away parentheses seems dangerous in many cases. In fact, you can also leave away the dots when calling a method: myObject method myArgument Because of the amount of ambiguities this can generate, especially when chaining more method calls, I think that this technique should be best avoided. But in some situations, it’s just convenient to “forget” about the parens. E.g. val s = myObject.toString 4. Type inference This one is really annoying in Java, and it seems that many other languages have gotten it right, in the meantime. Java only has limited type inference capabilities, and things aren’t as bright as they could be. In Scala, I could simply write: val s = myObject.toString … and not care about the fact that s is of type String. Sometimes, but only sometimes I like to explicitly specify the type of my reference. In that case, I can still do it: val s : String = myObject.toString 5. Case classes I think I’d fancy writing another POJO with 40 attributes, constructors, getters, setters, equals, hashCode, and toString — Said no one. Ever Scala has case classes. Simple immutable pojos written in one-liners. Take the Person case class for instance: case class Person(firstName: String, lastName: String) I do have to write down the attributes once, agreed. 
But everything else should be automatic. And how do you create an instance of such a case class? Easily, you don’t even need the new operator (in fact, it completely escapes my imagination why new is really needed in the first place): Person("George", "Orwell") That’s it. What else do you want to write to be Enterprise-compliant? Side-note OK, some people will now argue to use project lombok. Annotation-based code generation is nonsense and should be best avoided. In fact, many annotations in the Java ecosystem are simple proof of the fact that the Java language is – and will forever be – very limited in its evolution capabilities. Take @Override for instance. This should be a keyword, not an annotation. You may think it’s a cosmetic difference, but I say that Scala has proven that annotations are pretty much always the wrong tool. Or have you seen heavily annotated Scala code, recently? 6. Methods (functions!) everywhere This one is really one of the most useful features in any language, in my opinion. Why do we always have to link a method to a specific class? Why can’t we simply have methods in any scope level? Because we can, with Scala: // "Top-level", i.e. associated with the package def m1(i : Int) = i + 1object Test {// "Static" method in the Test instance def m2(i : Int) = i + 2 def main(args: Array[String]): Unit = {// Local method in the main method def m3(i : Int) = i + 3 println(m1(1)) println(m2(1)) println(m3(1)) } } Right? Why shouldn’t I be able to define a local method in another method? I can do that with classes in Java: public void method() { class LocalClass {}System.out.println(new LocalClass()); } A local class is an inner class that is local to a method. This is hardly ever useful, but what would be really useful is are local methods. These are also supported in JavaScript or ...

The Experience Paradox–How to Get a Job Without Experience

One of the most difficult things about becoming a software developer is the experience paradox of needing to have a job to get experience and needing to have experience in order to get a job. This problem is of course not limited to the field of software development, but many new software developers often struggle with getting that first job–especially if they come into software development from another field–most notoriously quality assurance. It definitely seems like it is especially difficult to transition from the role of quality analyst to software developer–even if you can competently write code. So, how exactly do you get experience when you can't get a job without it?

It's an interesting problem to ponder. There don't seem to be very many good solutions to it. Most software developers just assume you have to get lucky in order to get that first job without experience. The other popular alternative is to simply lie about your previous experience until you have enough that you don't have to. I'm not really a big fan of making up a fake history in order to get a job. It's pretty likely you'll get found out, and it's not a great way to start off a relationship with an employer. And I'm not that fond of leaving things up to luck either. Counting on luck isn't a good way to build a successful career. I'd much rather work with things that are directly under my control than rely on chance. So, that brings us back to the question we started with–or rather a modification of it: without experience, lying or dumb luck, how can I get a job?

One of the best ways to gain experience without a job is to create your own job. What's that you say? Create my own job? Yes, you heard me right. There is basically nothing stopping you from creating your own company, hiring yourself as the only employee and doing real work that will count as valid experience. Now, of course, it's not just as simple as that. You need to create some real, valid experience. You can't just create a shim company, call yourself the lead software developer and use it to get a job. But what you can do is work on creating a few simple apps and do it under the name of a company that you create. There is nothing dishonest or fishy about that approach. Today, it is easier than ever to do, because it is possible to create a simple mobile or web application as a solo developer. It is even easy to sell an application you create–although getting customers might be the hard part.

I usually recommend that developers starting out, who are trying to get experience, start off by developing a few mobile applications. The reason I recommend this approach is that mobile applications are generally expected to be fairly small projects and they are easy to sell and distribute. Actually having an application that is being used or sold brings a bit more credibility than just building something for "fun." But you don't have to build a mobile application. Really, you just have to build something useful that is a real application–not just a demo project. This means building something end-to-end. The barrier to entry is extremely low today. Just about anyone can build their own application and distribute it. That means that you can build a real, legit software company all by yourself. With a few applications created, not only will you be able to claim some real, valid experience, but you'll also be able to show the source code for the applications you built at a job interview.
You might even find that you will be ahead of some developers who have 4-to-5 years of experience, but have never successfully built an application end-to-end. Many software developers start out getting experience maintaining existing systems, but never learn how to actually build a complete application. If you can show some experience building a complete application, even if you are doing it for your own company, you can put yourself way ahead. If one of your applications takes off, you might even find that you don't need to get a job working for someone else. Your own software company might become successful itself.

The key is getting into the interview

You might be thinking that creating your own company and building a few apps is not the same as having a real job and having real experience. I tend to disagree; I think it is actually more valuable and shows a more practical ability, but I realize that some employers and some developers will disagree with me. It doesn't matter though, because the point of gaining this experience is to get into the job interview. It is very difficult to get a job interview without any experience on your resume, and it is fairly easy to make a job at your own company look just the same as a job at any other company on your resume.

Of course, once you get into the interview, you need to be completely honest about the situation. Don't try to pretend that the company you worked for was anything other than your own creation. Instead, use this information to your advantage. Talk about how you created your own job and took initiative instead of waiting for a job to come to you. Talk about how you learned a great deal by building and distributing your own applications. Turn everything that might be seen as a negative into a positive. Now, this technique of gaining your first experience might not get you a top-level software development position, but it should at least help you get your foot in the door–which is arguably the hardest part.

Reference: The Experience Paradox–How to Get a Job Without Experience from our JCG partner John Sonmez at the Making the Complex Simple blog.