
Hibernate Facts: Integration testing strategies

I like Integration Testing; it's a good way to check what SQL queries Hibernate generates behind the scenes. But Integration Tests require a running database server, and this is the first choice you have to make.

1. Using a production-like local database server for Integration Testing

For a production environment I always prefer using incremental DDL scripts, since I can always tell what version is deployed on a given server and which newer scripts still need to be deployed. I've been relying on Flyway to manage the schema updates for me, and I'm very happy with it.
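For reference, this is roughly how such a migration run can be bootstrapped programmatically; a minimal sketch using the classic Flyway API, with placeholder JDBC coordinates, assuming the versioned scripts (V1__create_schema.sql, V2__..., and so on) sit in the default db/migration classpath location:

import org.flywaydb.core.Flyway;

public class FlywayMigrationRunner {

    public static void main(String[] args) {
        // point Flyway at the target database (placeholder coordinates)
        Flyway flyway = new Flyway();
        flyway.setDataSource("jdbc:hsqldb:hsql://localhost/testdb", "sa", "");

        // applies every pending versioned script found under db/migration,
        // recording each applied version in Flyway's metadata table
        flyway.migrate();
    }
}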

On a small project, where the number of Integration Tests is rather small, you can employ a production-like local database server for testing as well. This is the safest option, since it guarantees you're testing against an environment very similar to the production setup.

The major drawback is test speed. Using an external database implies an additional timing cost, which may easily get out of control on a large project. After all, who's fond of running a 60-minute test routine on a daily basis?

2. In-memory database Integration testing

The reason I chose in-memory databases for Integration Testing is to speed up my tests' running time. The database is only one aspect affecting it; there are many others that may affect you as well, like destroying and recreating a Spring application context containing a large number of bean dependencies.
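As a side note on that second cost: the Spring TestContext framework caches the application context across test classes that declare the same configuration, so sticking to one shared test configuration avoids rebuilding it for every test class. A minimal sketch, with a hypothetical configuration file name:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
// every test class declaring this same location reuses one cached context
@ContextConfiguration(locations = "classpath:spring/applicationContext-test.xml")
public class CachedContextIntegrationTest {

    @Test
    public void contextLoads() {
        // the expensive application context is built once,
        // then served from the cache for subsequent test classes
    }
}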

There are many in-memory databases you could choose from: HSQLDB, H2, Apache Derby, to name a few.
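An in-memory HSQLDB instance, for example, needs nothing more than a mem: JDBC URL; this minimal sketch (the database name is a placeholder) spins one up, runs a few statements, and lets it vanish with the JVM:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemoryHsqldbExample {

    public static void main(String[] args) throws Exception {
        // "mem:" means the database lives only inside this JVM;
        // "sa" with an empty password is the HSQLDB default account
        try (Connection connection = DriverManager.getConnection(
                "jdbc:hsqldb:mem:testdb", "sa", "");
             Statement statement = connection.createStatement()) {
            statement.execute("CREATE TABLE event (id BIGINT PRIMARY KEY)");
            statement.execute("INSERT INTO event VALUES (1)");
            try (ResultSet resultSet = statement.executeQuery(
                    "SELECT COUNT(*) FROM event")) {
                resultSet.next();
                System.out.println("events: " + resultSet.getLong(1));
            }
        }
    }
}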

I've been using two in-memory schema generation strategies, each with its own pros and cons, which I am going to explain below.

2.1 Making use of hibernate.hbm2ddl.auto="update"

Hibernate is very flexible when it comes to configuration. Luckily, we can customize the DDL generation using the "hibernate.hbm2ddl.auto" SessionFactory property.

The simplest way to deploy a schema is to use the "update" option. This is useful for testing purposes, but I wouldn't rely on it for a production environment, for which incremental DDL scripts are a better approach.

So, choosing the "update" option is one way of managing the Integration Testing schema.

This is how I used it in my Hibernate Facts code examples.

Let's start with the JPA configuration, which you can find in the META-INF/persistence.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0"
             xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
    <persistence-unit name="testPersistenceUnit" transaction-type="JTA">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <exclude-unlisted-classes>false</exclude-unlisted-classes>

        <properties>
            <property name="hibernate.archive.autodetection"
                      value="class, hbm"/>
            <property name="hibernate.transaction.jta.platform"
                      value="org.hibernate.service.jta.platform.internal.BitronixJtaPlatform" />
            <property name="hibernate.dialect"
                      value="org.hibernate.dialect.HSQLDialect"/>
            <property name="hibernate.hbm2ddl.auto"
                      value="update"/>
            <property name="hibernate.show_sql"
                      value="true"/>
        </properties>
    </persistence-unit>
</persistence>

And the dataSource configuration looks like this:

<bean id="dataSource" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy">
        <constructor-arg>
            <bean class="bitronix.tm.resource.jdbc.PoolingDataSource" init-method="init"
                  destroy-method="close">
                <property name="className" value="bitronix.tm.resource.jdbc.lrc.LrcXADataSource"/>
                <property name="uniqueName" value="testDataSource"/>
                <property name="minPoolSize" value="0"/>
                <property name="maxPoolSize" value="5"/>
                <property name="allowLocalTransactions" value="true" />
                <property name="driverProperties">
                    <props>
                        <prop key="user">${jdbc.username}</prop>
                        <prop key="password">${jdbc.password}</prop>
                        <prop key="url">${jdbc.url}</prop>
                        <prop key="driverClassName">${jdbc.driverClassName}</prop>
                    </props>
                </property>
            </bean>
        </constructor-arg>
    </bean>

I think Bitronix is one of the most reliable tools I've ever worked with. When I was developing Java EE applications, I took advantage of the Transaction Manager supplied by the Application Server in use. For Spring-based projects, I had to employ a stand-alone Transaction Manager, and after evaluating JOTM, Atomikos and Bitronix, I settled on Bitronix. That was 5 years ago, and I've deployed several applications with it ever since.

I prefer using XA Transactions even if the application currently uses only one Data Source. I don't have to worry about any noticeable performance penalty for employing JTA, as Bitronix uses 1PC (One-Phase Commit) when the current Transaction has only one enlisted Data Source. It also makes it possible to add up to one non-XA Data Source, thanks to the Last Resource Commit optimization.

When using JTA, it's not advisable to mix XA and Local Transactions, since not all XA Data Sources allow operating inside a Local Transaction; I therefore tend to avoid such mixing as much as possible.
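For the record, this is roughly what JTA transaction demarcation looks like when driving Bitronix programmatically; a bare-bones sketch, assuming a Data Source like the one above has already been registered with the Transaction Manager:

import bitronix.tm.BitronixTransactionManager;
import bitronix.tm.TransactionManagerServices;

public class BitronixDemarcationExample {

    public static void main(String[] args) throws Exception {
        // Bitronix exposes its JTA TransactionManager as a singleton service
        BitronixTransactionManager transactionManager =
                TransactionManagerServices.getTransactionManager();
        try {
            transactionManager.begin();
            try {
                // work on connections from enlisted XA Data Sources goes here;
                // with a single enlisted resource, Bitronix commits with 1PC
                transactionManager.commit();
            } catch (Exception e) {
                transactionManager.rollback();
                throw e;
            }
        } finally {
            transactionManager.shutdown();
        }
    }
}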

Unfortunately, as simple as this DDL generation method is, it has one flaw I'm not too fond of: I can't disable the "allowLocalTransactions" setting, since Hibernate generates the DDL script and applies it outside of an XA Transaction.

Another drawback is that you have little control over what DDL script Hibernate deploys on your behalf, and in this particular context I don't like trading flexibility for convenience.

If you don't use JTA, and you don't need the flexibility of deciding which DDL schema gets deployed on your current database server, then hibernate.hbm2ddl.auto="update" is probably the right choice for you.

2.2 Flexible schema deploy

This method consists of two steps: the first is to have Hibernate generate the DDL scripts, and the second is to deploy them in a customized fashion.

To generate the DDL scripts, I have to use the following Ant task (even though it runs through Maven), because there is no Hibernate 4 Maven plugin I could use at the time of writing:

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-antrun-plugin</artifactId>
	<executions>
		<execution>
			<id>generate-test-sql-scripts</id>
			<phase>generate-test-resources</phase>
			<goals>
				<goal>run</goal>
			</goals>
			<configuration>
				<tasks>
					<property name="maven_test_classpath" refid="maven.test.classpath"/>
					<path id="hibernate_tools_path">
						<pathelement path="${maven_test_classpath}"/>
					</path>
					<property name="hibernate_tools_classpath" refid="hibernate_tools_path"/>
					<taskdef name="hibernatetool"
							 classname="org.hibernate.tool.ant.HibernateToolTask"/>
					<mkdir dir="${project.build.directory}/test-classes/hsqldb"/>
					<hibernatetool destdir="${project.build.directory}/test-classes/hsqldb">
						<classpath refid="hibernate_tools_path"/>
						<jpaconfiguration persistenceunit="testPersistenceUnit"
										  propertyfile="src/test/resources/META-INF/spring/jdbc.properties"/>
						<hbm2ddl drop="false" create="true" export="false"
								 outputfilename="create_db.sql"
								 delimiter=";" format="true"/>
						<hbm2ddl drop="true" create="false" export="false"
								 outputfilename="drop_db.sql"
								 delimiter=";" format="true"/>
					</hibernatetool>
				</tasks>
			</configuration>
		</execution>
	</executions>
	...
</plugin>

Having the "create" and "drop" DDL scripts, we now have to deploy them when the Spring context starts, and this is done using the following custom utility class:

import java.sql.Connection;
import java.sql.SQLException;

import javax.sql.DataSource;

import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.Resource;
import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.ConnectionCallback;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallback;
import org.springframework.transaction.support.TransactionTemplate;

/**
 * Runs the generated "create" scripts when the Spring context starts
 * and the "drop" scripts when the context shuts down.
 */
public class DatabaseScriptLifecycleHandler implements InitializingBean, DisposableBean {

    private final Resource[] initScripts;
    private final Resource[] destroyScripts;

    private JdbcTemplate jdbcTemplate;

    @Autowired
    private TransactionTemplate transactionTemplate;

    private String sqlScriptEncoding = "UTF-8";
    private String commentPrefix = "--";
    private boolean continueOnError;
    private boolean ignoreFailedDrops;

    public DatabaseScriptLifecycleHandler(DataSource dataSource,
                                          Resource[] initScripts,
                                          Resource[] destroyScripts) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
        this.initScripts = initScripts;
        this.destroyScripts = destroyScripts;
    }

    public void afterPropertiesSet() throws Exception {
        initDatabase();
    }

    public void destroy() throws Exception {
        destroyDatabase();
    }

    public void initDatabase() {
        final ResourceDatabasePopulator resourceDatabasePopulator = createResourceDatabasePopulator();
        transactionTemplate.execute(new TransactionCallback<Void>() {
            @Override
            public Void doInTransaction(TransactionStatus status) {
                jdbcTemplate.execute(new ConnectionCallback<Void>() {
                    @Override
                    public Void doInConnection(Connection con) throws SQLException, DataAccessException {
                        resourceDatabasePopulator.setScripts(getInitScripts());
                        resourceDatabasePopulator.populate(con);
                        return null;
                    }
                });
                return null;
            }
        });
    }

    public void destroyDatabase() {
        final ResourceDatabasePopulator resourceDatabasePopulator = createResourceDatabasePopulator();
        transactionTemplate.execute(new TransactionCallback<Void>() {
            @Override
            public Void doInTransaction(TransactionStatus status) {
                jdbcTemplate.execute(new ConnectionCallback<Void>() {
                    @Override
                    public Void doInConnection(Connection con) throws SQLException, DataAccessException {
                        resourceDatabasePopulator.setScripts(getDestroyScripts());
                        resourceDatabasePopulator.populate(con);
                        return null;
                    }
                });
                return null;
            }
        });
    }

    public Resource[] getInitScripts() {
        return initScripts;
    }

    public Resource[] getDestroyScripts() {
        return destroyScripts;
    }

    public String getSqlScriptEncoding() {
        return sqlScriptEncoding;
    }

    public String getCommentPrefix() {
        return commentPrefix;
    }

    public boolean isContinueOnError() {
        return continueOnError;
    }

    public boolean isIgnoreFailedDrops() {
        return ignoreFailedDrops;
    }

    protected ResourceDatabasePopulator createResourceDatabasePopulator() {
        ResourceDatabasePopulator resourceDatabasePopulator = new ResourceDatabasePopulator();
        resourceDatabasePopulator.setCommentPrefix(getCommentPrefix());
        resourceDatabasePopulator.setContinueOnError(isContinueOnError());
        resourceDatabasePopulator.setIgnoreFailedDrops(isIgnoreFailedDrops());
        resourceDatabasePopulator.setSqlScriptEncoding(getSqlScriptEncoding());
        return resourceDatabasePopulator;
    }
}

which is configured as follows:

<bean id="databaseScriptLifecycleHandler" class="vladmihalcea.util.DatabaseScriptLifecycleHandler"
	  depends-on="transactionManager">
	<constructor-arg name="dataSource" ref="dataSource"/>
	<constructor-arg name="initScripts">
		<array>
			<bean class="org.springframework.core.io.ClassPathResource">
				<constructor-arg value="hsqldb/create_db.sql"/>
			</bean>
		</array>
	</constructor-arg>
	<constructor-arg name="destroyScripts">
		<array>
			<bean class="org.springframework.core.io.ClassPathResource">
				<constructor-arg value="hsqldb/drop_db.sql"/>
			</bean>
		</array>
	</constructor-arg>
</bean>

This time we can get rid of local transactions altogether, so we can safely set:

<property name="allowLocalTransactions" value="false" />

 
