
What's New Here?


The Illusion of Control

Why do we keep trying to attain control? We don't like uncertainty, and control over a piece of reality relaxes us. A controlled environment is good for our health. In fact, when we are in control we don't need to do anything. Things work out by themselves, to our satisfaction, because we're in control. Being in control means never having to decide, because decisions are made for us. Decision making is hard. It's tiring. If we reach control nirvana, it's all smooth sailing from there. So we are left with just one tiny problem, because the amount of control in our world is…

Slim to None

I cannot coerce people into doing my bidding. I can try to persuade, incentivize, cajole, bribe or threaten. In the end, what they do is their choice. I can prepare big project plans and scream at my team to stick to them. I can give them the best tools to do their work, and they still might fail. As managers, we can't control our employees. As an organization, we can't control the market. Even as a person, all I can control are my own actions. We want control because it frees us from making decisions. Maybe we should try making decisions easier instead.

Visibility

"You can't have control, but you can have visibility," says my friend Lior Friedman. When we have visibility, the fog of uncertainty fades and choices become clearer. Visibility is a product of our actions; it is under our control. As managers, we can create a safe environment where issues can surface, and we can do something about them rather than keeping them secret until the last irresponsible moment. We can be honest with our customers to build trust and improve our relationship with them, which will benefit both sides.

If we can't have control, we should stop putting effort into it and instead invest in visibility. We'll still be left with decisions, but life becomes a little less uncertain. That's good. For everyone, including us.

Reference: The Illusion of Control from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

How to use SSH tunneling to get to your restricted servers

Have you ever been told that in your network serverX can only be reached from serverY via SSH? You have normal SSH access to serverY from your own PC, but not directly to serverX. What can you do in a situation like this if you need to access the restricted serverX? Well, you can always ssh into serverY and then ssh again into serverX to check your work, a log file, or whatever you need. But what happens if a database server or a WebLogic Server instance is running on serverX and you want your local PC's fancy tools to access it (e.g. accessing the WLS admin console, or using SqlDeveloper to connect to your DB)? That's where SSH tunneling can help you, and here is how.

Establish a connection to serverY, the host you do have access to from your PC. At the same time, you create a tunnel to serverX (your restricted server) by having serverY forward the traffic between serverX and a dedicated port on your local PC. It sounds scary, but it can be done with a single command. For example, this is how I can access the WLS Admin Console application running on serverX. On your own PC, open a terminal and run the following:

bash> ssh -L 12345:serverX:7001 serverY

The command will prompt you for your serverY SSH credentials. Once logged in, you need to keep the terminal open. The tunnel is now established and forwards port 7001 on serverX (where the WLS admin console is running) to port 12345 on your own PC. Open a browser on your own PC and type in the address http://localhost:12345/console. You should now be able to access the WLS admin console of your restricted serverX!

The same can be done with a database server such as MySQL. For example, you would run:

ssh -L 12346:serverX:3306 serverY

and then change your SqlDeveloper JDBC connection URL string to the tunnel port:

jdbc:mysql://localhost:12346/mydb

This is a cool technique to get around a secured environment.

Reference: How to use SSH tunneling to get to your restricted servers from our JCG partner Zemian Deng at the A Programmer's Journal blog....
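If you prefer to verify the tunnel from code rather than from SqlDeveloper, a plain JDBC check works the same way, because the tunnelled database simply appears on a localhost port. A minimal sketch: the mydb schema and port 12346 come from the tunnel example above, while the myuser/mypassword credentials are placeholders, and the MySQL JDBC driver is assumed to be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TunnelCheck {

    public static void main(String[] args) throws Exception {
        // With "ssh -L 12346:serverX:3306 serverY" running, the MySQL instance
        // on the restricted serverX is reachable on localhost:12346.
        String url = "jdbc:mysql://localhost:12346/mydb";
        try (Connection connection = DriverManager.getConnection(url, "myuser", "mypassword");
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT 1")) {
            resultSet.next();
            System.out.println("Tunnel is up, SELECT 1 returned " + resultSet.getInt(1));
        }
    }
}

Everything else is identical to connecting to a database running on your own machine; only the port differs.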

SBT AutoPlugins Tutorial

This tutorial will guide you through the process of writing your own sbt plugin. There are several reasons to do this, and it's really simple:

Add customized build steps to your continuous integration process
Provide default settings for different environments for various projects

Before you start, make sure to have sbt installed on your machine and accessible via the command line. The code is available on GitHub. If you start at the first commit you can step through the tutorial commit by commit.

Setup

The first step is to set up our plugin project. There is only one important setting in your build.sbt, the rest is up to you.

sbtPlugin := true

This will mark your project as an sbt-plugin build. For this tutorial I'm using sbt 0.13.7-M3, which lets you write your build.sbt as you like. No need for separating lines. The complete build.sbt looks like this.

name := "awesome-os"
organization := "de.mukis"

scalaVersion in Global := "2.10.2"

sbtPlugin := true

// Settings to build a nice looking plugin site
site.settings
com.typesafe.sbt.SbtSite.SiteKeys.siteMappings <+= (baseDirectory) map { dir =>
  val nojekyll = dir / "src" / "site" / ".nojekyll"
  nojekyll -> ".nojekyll"
}
site.sphinxSupport()
site.includeScaladoc()

// enable github pages
ghpages.settings
git.remoteRepo := "git@github.com:muuki88/sbt-autoplugins-tutorial.git"

// Scripted - sbt plugin tests
scriptedSettings
scriptedLaunchOpts <+= version apply { v => "-Dproject.version="+v }

The plugins used inside this build are configured in the project/plugins.sbt. Nothing special.

The Plugin

Now we implement a first working version of our plugin and a test project to try it out. What the plugin will actually do is print out awesome operating systems. Later we will customize this behavior. Let's take a look at our plugin code.

import sbt._
import sbt.Keys.{ streams }

/**
 * This plugin helps you find out which operating systems are awesome
 */
object AwesomeOSPlugin extends AutoPlugin {

  /**
   * Defines all settings/tasks that get automatically imported
   * when the plugin is enabled
   */
  object autoImport {
    lazy val awesomeOsPrint = TaskKey[Unit]("awesome-os-print", "Prints all awesome operating systems")
    lazy val awesomeOsList = SettingKey[Seq[String]]("awesome-os-list", "A list of awesome operating systems")
  }

  import autoImport._

  /**
   * Provide default settings
   */
  override lazy val projectSettings = Seq(
    awesomeOsList := Seq(
      "Ubuntu 12.04 LTS", "Ubuntu 14.04 LTS", "Debian Squeeze",
      "Fedora 20", "CentOS 6",
      "Android 4.x",
      "Windows 2000", "Windows XP", "Windows 7", "Windows 8.1",
      "MacOS Maverick", "MacOS Yosemite",
      "iOS 6", "iOS 7"
    ),
    awesomeOsPrint := {
      awesomeOsList.value foreach (os => streams.value.log.info(os))
    }
  )
}

And that's it. We define two keys. awesomeOsList is a SettingKey, which means it's set upfront and will only change if someone explicitly sets it to another value or changes it, e.g.

awesomeOsList += "Solaris"

awesomeOsPrint is a task, which means it gets executed each time you call it.

Test Project

Let's try the plugin out. For this we create a test project which has a plugin dependency on our awesome-os plugin. We create a test-project directory at the root directory of our plugin project. Inside test-project we add a build.sbt with the following contents:

name := "test-project"

version := "1.0"

// enable our new plugin
enablePlugins(AwesomeOSPlugin)

However, the real trick is done inside the test-project/project/plugins.sbt.
We create a reference to a project in the parent directory:

// build root project
lazy val root = Project("plugins", file(".")) dependsOn(awesomeOS)

// depends on the awesomeOS project
lazy val awesomeOS = file("..").getAbsoluteFile.toURI

And that's all. Run sbt inside the test-project and print out the awesome operating systems.

sbt awesomeOsPrint

If you change something in your plugin code, just call reload and your test-project will recompile the changes.

Add a new task and test it

Next, we add a task which stores the awesomeOsList inside a file. This is something we can test automatically. Testing sbt plugins is a bit tedious, but doable with the scripted-plugin. First we create a folder inside src/sbt-test. The directories inside sbt-test can be seen as categories where you put your tests into. I created a global folder where I put two test projects. The critical configuration is again inside the project/plugins.sbt:

addSbtPlugin("de.mukis" % "awesome-os" % sys.props("project.version"))

The scripted plugin first publishes the plugin locally and then passes the version number to each started sbt test build via the system property project.version. We added this behaviour in our build.sbt earlier:

scriptedLaunchOpts <+= version apply { v => "-Dproject.version="+v }

Each test project contains a file called test, which can contain sbt commands and some simple check commands. Normally you put in some simple checks like "file exists" and do the more sophisticated stuff inside a task defined in the test project. The test file for our second test looks like this.

# Create the another-os.txt file
> awesomeOsStore
$ exists target/another-os.txt
> check-os-list

The check-os-list task is defined inside the build.sbt of the test project (src/sbt-test/global/store-custom-oslist/build.sbt).

enablePlugins(AwesomeOSPlugin)

name := "simple-test"

version := "0.1.0"

awesomeOsFileName := "another-os.txt"

// this is the scripted test
TaskKey[Unit]("check-os-list") := {
  val list = IO.read(target.value / awesomeOsFileName.value)
  assert(list contains "Ubuntu", "Ubuntu not present in awesome operating systems: " + list)
}

Separate operating systems per plugin

Our next goal is to customize the operating system list, so users may choose which systems they like most. We do this by generating a configuration scope for each operating system category and a plugin that configures the settings in this scope. In a real-world plugin you can use this to define different actions in different environments, e.g. development, staging or production. This is a very crucial point of autoplugins, as it allows you to enable specific plugins to get a different build flavor and/or create different scopes which are configured by different plugins. The first step is to create three new autoplugins: AwesomeWindowsPlugin, AwesomeMacPlugin and AwesomeLinuxPlugin.
They will all work in the same fashion:

Scope the projectSettings from AwesomeOSPlugin to their custom defined configuration scope and provide them as settings
Override specific settings/tasks inside the custom defined configuration scope

The AwesomeLinuxPlugin looks like this:

import sbt._

object AwesomeLinuxPlugin extends AutoPlugin {

  object autoImport {
    /** Custom configuration scope */
    lazy val Linux = config("awesomeLinux")
  }

  import AwesomeOSPlugin.autoImport._
  import autoImport._

  /** This plugin requires the AwesomeOSPlugin to be enabled */
  override def requires = AwesomeOSPlugin

  /** If all requirements are met, this plugin will automatically get enabled */
  override def trigger = allRequirements

  /**
   * 1. Use the AwesomeOSPlugin settings as default and scope them to Linux
   * 2. Override the default settings inside the Linux scope
   */
  override lazy val projectSettings = inConfig(Linux)(AwesomeOSPlugin.projectSettings) ++ settings

  /**
   * the linux specific settings
   */
  private lazy val settings: Seq[Setting[_]] = Seq(
    awesomeOsList in Linux := Seq(
      "Ubuntu 12.04 LTS",
      "Ubuntu 14.04 LTS",
      "Debian Squeeze",
      "Fedora 20",
      "CentOS 6",
      "Android 4.x"),
    // add awesome os to the general list
    awesomeOsList ++= (awesomeOsList in Linux).value
  )
}

The other plugins are defined in the same way. Let's try things out. Start sbt in your test-project.

sbt awesomeOsPrint             # will print all operating systems
awesomeWindows:awesomeOsPrint  # will only print awesome windows os
awesomeMac:awesomeOsPrint      # only mac
awesomeLinux:awesomeOsPrint    # only linux

SBT already provides some scopes like Compile, Test, etc., so there is only a small need for creating your very own scopes. Most of the time you will use the ones already provided and customize them in your plugins.

One more note: you may wonder why the plugins all get enabled even though we didn't have to change anything in the test-project. That's another benefit of autoplugins. You can specify requires, which defines dependencies between plugins, and trigger, which specifies when your plugin should be enabled.

// what is required so that this plugin can be enabled
override def requires = AwesomeOSPlugin

// when should this plugin be enabled
override def trigger = allRequirements

The user of your plugin now doesn't have to care about the order in which the plugins appear in the build.sbt, because the developer defines the requirements upfront and sbt will try to fulfill them.

Conclusion

SBT autoplugins make the life of plugin users and developers a lot easier. They lower the steep learning curve of sbt a bit and lead to more readable build files. For sbt-plugin developers the migration process isn't very difficult: it mostly comes down to replacing sbt.Plugin with sbt.AutoPlugin and creating an autoImport field.

Reference: SBT AutoPlugins Tutorial from our JCG partner Nepomuk Seiler at the mukis.de blog....

Spring from the Trenches: Resetting Auto Increment Columns Before Each Test Method

When we are writing integration tests for a function that saves information to the database, we have to verify that the correct information is saved. If our application uses the Spring Framework, we can use Spring Test DbUnit and DbUnit for this purpose. However, it is very hard to verify that the correct value is inserted into the primary key column, because primary keys are typically generated automatically by using either auto increment or a sequence. This blog post identifies the problem related to columns whose values are generated automatically and helps us to solve it.

Additional Reading:

The tested application is described in a blog post titled: Spring from the Trenches: Using Null Values in DbUnit Datasets. I recommend that you read that blog post because I am not going to repeat its content here.
If you don't know how to write integration tests for your repositories, you should read my blog post titled: Spring Data JPA Tutorial: Integration Testing. It explains how you can write integration tests for Spring Data JPA repositories, but you can use the same approach for writing tests for other Spring powered repositories that use a relational database.

We Cannot Assert the Unknown

Let's start by writing two integration tests for the save() method of the CrudRepository interface. These tests are described in the following:

The first test ensures that the correct information is saved to the database when the title and the description of the saved Todo object are set.
The second test verifies that the correct information is saved to the database when only the title of the saved Todo object is set.

Both tests initialize the used database by using the same DbUnit dataset (no-todo-entries.xml), which looks as follows:

<dataset>
    <todos/>
</dataset>

The source code of our integration test class looks as follows:

import com.github.springtestdbunit.DbUnitTestExecutionListener;
import com.github.springtestdbunit.annotation.DatabaseSetup;
import com.github.springtestdbunit.annotation.DbUnitConfiguration;
import com.github.springtestdbunit.annotation.ExpectedDatabase;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;
import org.springframework.test.context.support.DirtiesContextTestExecutionListener;
import org.springframework.test.context.transaction.TransactionalTestExecutionListener;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {PersistenceContext.class})
@TestExecutionListeners({
    DependencyInjectionTestExecutionListener.class,
    DirtiesContextTestExecutionListener.class,
    TransactionalTestExecutionListener.class,
    DbUnitTestExecutionListener.class
})
@DbUnitConfiguration(dataSetLoader = ColumnSensingReplacementDataSetLoader.class)
public class ITTodoRepositoryTest {

    private static final Long ID = 2L;
    private static final String DESCRIPTION = "description";
    private static final String TITLE = "title";
    private static final long VERSION = 0L;

    @Autowired
    private TodoRepository repository;

    @Test
    @DatabaseSetup("no-todo-entries.xml")
    @ExpectedDatabase("save-todo-entry-with-title-and-description-expected.xml")
    public void
save_WithTitleAndDescription_ShouldSaveTodoEntryToDatabase() { Todo todoEntry = Todo.getBuilder() .title(TITLE) .description(DESCRIPTION) .build();repository.save(todoEntry); }@Test @DatabaseSetup("no-todo-entries.xml") @ExpectedDatabase("save-todo-entry-without-description-expected.xml") public void save_WithoutDescription_ShouldSaveTodoEntryToDatabase() { Todo todoEntry = Todo.getBuilder() .title(TITLE) .description(null) .build();repository.save(todoEntry); } } These are not very good integration tests because they only test that Spring Data JPA and Hibernate are working correctly. We shouldn’t waste our time by writing tests for frameworks. If we don’t trust a framework, we shouldn’t use it. If you want to learn to write good integration tests for your data access code, you should read my tutorial titled: Writing Tests for Data Access Code. The DbUnit dataset (save-todo-entry-with-title-and-description-expected.xml), which is used to verify that the title and the description of the saved Todo object are inserted into the todos table, looks as follows: <dataset> <todos id="1" description="description" title="title" version="0"/> </dataset> The DbUnit dataset (save-todo-entry-without-description-expected.xml), which is used to verify that only the title of the saved Todo object is inserted the todos table, looks as follows: <dataset> <todos id="1" description="[null]" title="title" version="0"/> </dataset> When we run our integration tests, one of them fails and we see the following error message: junit.framework.ComparisonFailure: value (table=todos, row=0, col=id) Expected :1 Actual :2 The reason for this is that the id column of the todos table is an auto increment column, and the integration test that is invoked first “gets” the id 1. When the second integration test is invoked, the value 2 is saved to the id column and the test fails. Let’s find out how we can solve this problem. Fast Fixes for the Win? There are two fast fixes to our problem. These fixes are described in the following: First, we could annotate the test class with the @DirtiesContext annotation and set the value of its classMode attribute to DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD This would fix our problem because our application creates a new in-memory database when its application context is loaded, and the @DirtiesContext annotation ensures that each test method uses a new application context. 
The configuration of our test class looks as follows: import com.github.springtestdbunit.DbUnitTestExecutionListener; import com.github.springtestdbunit.annotation.DatabaseSetup; import com.github.springtestdbunit.annotation.DbUnitConfiguration; import com.github.springtestdbunit.annotation.ExpectedDatabase; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.test.annotation.DirtiesContext; import org.springframework.test.context.ContextConfiguration; import org.springframework.test.context.TestExecutionListeners; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import org.springframework.test.context.support.DependencyInjectionTestExecutionListener; import org.springframework.test.context.support.DirtiesContextTestExecutionListener; import org.springframework.test.context.transaction.TransactionalTestExecutionListener;@RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(classes = {PersistenceContext.class}) @TestExecutionListeners({ DependencyInjectionTestExecutionListener.class, DirtiesContextTestExecutionListener.class, TransactionalTestExecutionListener.class, DbUnitTestExecutionListener.class }) @DbUnitConfiguration(dataSetLoader = ColumnSensingReplacementDataSetLoader.class) @DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD) public class ITTodoRepositoryTest {} This looks clean but unfortunately it can destroy the performance of our integration test suite because it creates a new application context before each test method is invoked. That is why we should not use the @DirtiesContext annotation unless it is ABSOLUTELY NECESSARY. However, if our application has only a small number of integration tests, the performance penalty caused by the @DirtiesContext annotation might be tolerable. We shouldn’t abandon this solution just because it makes our tests slower. Sometimes this is acceptable, and if this is the case, using the @DirtiesContext annotation is a good solution. Additional Reading:The Javadoc of the @DirtiesContext annotation The Javadoc of the @DirtiesContext.ClassMode enumSecond, we could omit the id attribute of the todos element from our datasets, and set the value of @ExpectedDatabase annotation’s assertionMode attribute to DatabaseAssertionMode.NON_STRICT. This would fix our problem because the DatabaseAssertionMode.NON_STRICT means that the columns and the tables that are not present in our dataset file are ignored. This assertion mode is a useful tool because it gives us the possibility to ignore tables whose information is not changed by the tested code. However, the DatabaseAssertionMode.NON_STRICT is not the correct tool for solving this particular problem because it forces us to write datasets that verify too few things. For example, we cannot use the following dataset: <dataset> <todos id="1" description="description" title="title" version="0"/> <todos description="description two" title="title two" version="0"/> </dataset> If we use the DatabaseAssertionMode.NON_STRICT, the every “row” of our dataset must specify the same columns. In other words, we have to modify our dataset to look like this: <dataset> <todos description="description" title="title" version="0"/> <todos description="description two" title="title two" version="0"/> </dataset> This is not a big deal because we can trust that Hibernate inserts the correct id into the id column of the todos table. 
However, if each todo entry could have 0..* tags, we would be in trouble. Let’s assume that we have to write an integration test that inserts two new todo entries to the database and create a DbUnit dataset which ensures thatThe todo entry titled: ‘title one’ has a tag called: ‘tag one’ The todo entry titled: ‘title two’ has a tag called: ‘tag two’Our best effort looks as follows: <dataset> <todos description="description" title="title one" version="0"/> <todos description="description two" title="title two" version="0"/> <tags name="tag one" version="0"/> <tags name="tag two" version="0"/> </dataset> We cannot create a useful DbUnit dataset because we don’t know the ids of the todo entries that are saved to the database. We have to find a better solution. Finding a Better Solution We have already found two different solutions for our problem, but both of them create new problems. There is a third solution that is based on the following idea: If we don’t know the next value that is inserted into an auto increment column, we have to reset the auto increment column before each test method is invoked. We can do this by following these steps:Create a class that is used to reset the auto increment columns of the specified database tables. Fix our integration tests.Let’s get our hands dirty. Creating the Class that Can Reset Auto-Increment Columns We can create the class, which can reset the auto increments columns of the specified database tables, by following these steps:Create a final class called DbTestUtil and prevent its instantiation by adding a private constructor to it. Add a public static void resetAutoIncrementColumns() method to the DbTestUtil class. This method takes two method parameters:The ApplicationContext object contains the configuration of the tested application. The names of the database tables whose auto increment columns must be reseted.Implement this method by following these steps:Get a reference to the DataSource object. Read the SQL template from the properties file (application.properties) by using the key ‘test.reset.sql.template’. Open a database connection. Create the invoked SQL statements and invoke them.The source code of the DbTestUtil class looks as follows: import org.springframework.context.ApplicationContext; import org.springframework.core.env.Environment;import javax.sql.DataSource; import java.sql.Connection; import java.sql.SQLException; import java.sql.Statement;public final class DbTestUtil {private DbTestUtil() {}public static void resetAutoIncrementColumns(ApplicationContext applicationContext, String... tableNames) throws SQLException { DataSource dataSource = applicationContext.getBean(DataSource.class); String resetSqlTemplate = getResetSqlTemplate(applicationContext); try (Connection dbConnection = dataSource.getConnection()) { //Create SQL statements that reset the auto increment columns and invoke //the created SQL statements. 
for (String resetSqlArgument: tableNames) { try (Statement statement = dbConnection.createStatement()) { String resetSql = String.format(resetSqlTemplate, resetSqlArgument); statement.execute(resetSql); } } } }private static String getResetSqlTemplate(ApplicationContext applicationContext) { //Read the SQL template from the properties file Environment environment = applicationContext.getBean(Environment.class); return environment.getRequiredProperty("test.reset.sql.template"); } } Additional Information:The Javadoc of the ApplicationContext interface The Javadoc of the DataSource interface The Javadoc of the Environment interface The Javadoc of the String.format() methodLet’s move on and find out how we can use this class in our integration tests. Fixing Our Integration Tests We can fix our integration tests by following these steps:Add the reset SQL template to the properties file of our example application. Reset the auto increment column (id) of the todos table before our test methods are invoked.First, we have to add the reset SQL template to the properties file of our example application. This template must use the format that is supported by the format() method of the String class. Because our example application uses the H2 in-memory database, we have to add the following SQL template to our properties file: test.reset.sql.template=ALTER TABLE %s ALTER COLUMN id RESTART WITH 1 Additional Information:The application context configuration class of our example application The Javadoc of the String.format() method Resetting Auto Increment in H2 How To Reset MySQL Autoincrement Column PostgreSQL 9.3 Documentation: ALTER SEQUENCESecond, we have to reset the auto increment column (id) of the todos table before our test methods are invoked. We can do this by making the following changes to the ITTodoRepositoryTest class:Inject the ApplicationContext object, which contains the configuration of our example application, into the test class. 
Reset the auto increment column of the todos table.The source code of our fixed integration test class looks as follows (the changes are highlighted): import com.github.springtestdbunit.DbUnitTestExecutionListener; import com.github.springtestdbunit.annotation.DatabaseSetup; import com.github.springtestdbunit.annotation.DbUnitConfiguration; import com.github.springtestdbunit.annotation.ExpectedDatabase; import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.ApplicationContext; import org.springframework.test.context.ContextConfiguration; import org.springframework.test.context.TestExecutionListeners; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import org.springframework.test.context.support.DependencyInjectionTestExecutionListener; import org.springframework.test.context.support.DirtiesContextTestExecutionListener; import org.springframework.test.context.transaction.TransactionalTestExecutionListener;import java.sql.SQLException;@RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(classes = {PersistenceContext.class}) @TestExecutionListeners({ DependencyInjectionTestExecutionListener.class, DirtiesContextTestExecutionListener.class, TransactionalTestExecutionListener.class, DbUnitTestExecutionListener.class }) @DbUnitConfiguration(dataSetLoader = ColumnSensingReplacementDataSetLoader.class) public class ITTodoRepositoryTest {private static final Long ID = 2L; private static final String DESCRIPTION = "description"; private static final String TITLE = "title"; private static final long VERSION = 0L;@Autowired private ApplicationContext applicationContext;@Autowired private TodoRepository repository;@Before public void setUp() throws SQLException { DbTestUtil.resetAutoIncrementColumns(applicationContext, "todos"); }@Test @DatabaseSetup("no-todo-entries.xml") @ExpectedDatabase("save-todo-entry-with-title-and-description-expected.xml") public void save_WithTitleAndDescription_ShouldSaveTodoEntryToDatabase() { Todo todoEntry = Todo.getBuilder() .title(TITLE) .description(DESCRIPTION) .build();repository.save(todoEntry); }@Test @DatabaseSetup("no-todo-entries.xml") @ExpectedDatabase("save-todo-entry-without-description-expected.xml") public void save_WithoutDescription_ShouldSaveTodoEntryToDatabase() { Todo todoEntry = Todo.getBuilder() .title(TITLE) .description(null) .build();repository.save(todoEntry); } } Additional Information:The Javadoc of the @Autowired annotation The Javadoc of the ApplicationContext interface The Javadoc of the @Before annotationWhen we run our integration tests for the second time, they pass. Let’s move on and summarize what we learned from this blog post. Summary This blog has taught us three things:We cannot write useful integration tests if we don’t know the values that are inserted into columns whose values are generated automatically. Using the @DirtiesContext annotation might be a good choice if our application doesn’t have many integration tests. If our application has a lot of integration tests, we have to reset the auto increment columns before each test method is invoked.You can get the example application of this blog post from Github.Reference: Spring from the Trenches: Resetting Auto Increment Columns Before Each Test Method from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....
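One small usage note on DbTestUtil: because resetAutoIncrementColumns() takes a varargs list of table names, the same setUp() method also covers the todo-and-tags scenario discussed earlier. The tags table below is hypothetical and only mirrors that example.

@Before
public void setUp() throws SQLException {
    // The reset SQL template is applied once per table name, so every table
    // whose generated ids the datasets assert on can be reset in one call.
    DbTestUtil.resetAutoIncrementColumns(applicationContext, "todos", "tags");
}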

Hibernate collections optimistic locking

Introduction

Hibernate provides an optimistic locking mechanism to prevent lost updates even for long conversations. In conjunction with an entity storage spanning over multiple user requests (extended persistence context or detached entities), Hibernate can guarantee application-level repeatable reads. The dirty checking mechanism detects entity state changes and increments the entity version. While basic property changes are always taken into consideration, Hibernate collections are more subtle in this regard.

Owned vs Inverse collections

In relational databases, two records are associated through a foreign key reference. In this relationship, the referenced record is the parent while the referencing row (the foreign key side) is the child. A non-null foreign key may only reference an existing parent record.

In the object-oriented space this association can be represented in both directions. We can have a many-to-one reference from a child to its parent, and the parent can also have a one-to-many children collection. Because both sides could potentially control the database foreign key state, we must ensure that only one side is the owner of this association. Only the owning side's state changes are propagated to the database. The non-owning side has traditionally been referred to as the inverse side. Next I'll describe the most common ways of modelling this association.

The unidirectional parent-owning-side-child association mapping

Only the parent side has a @OneToMany non-inverse children collection. The child entity doesn't reference the parent entity at all.

@Entity(name = "post")
public class Post {
    ...
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Comment> comments = new ArrayList<Comment>();
    ...
}

The unidirectional parent-owning-side-child component association mapping

The child side doesn't always have to be an entity and we might model it as a component type instead. An Embeddable object (component type) may contain both basic types and association mappings but it can never contain an @Id. The Embeddable object is persisted/removed along with its owning entity. The parent has an @ElementCollection children association. The child entity may only reference the parent through the non-queryable Hibernate specific @Parent annotation.

@Entity(name = "post")
public class Post {
    ...
    @ElementCollection
    @JoinTable(name = "post_comments", joinColumns = @JoinColumn(name = "post_id"))
    @OrderColumn(name = "comment_index")
    private List<Comment> comments = new ArrayList<Comment>();
    ...
    public void addComment(Comment comment) {
        comment.setPost(this);
        comments.add(comment);
    }
}

@Embeddable
public class Comment {
    ...
    @Parent
    private Post post;
    ...
}

The bidirectional parent-owning-side-child association mapping

The parent is the owning side so it has a @OneToMany non-inverse (without a mappedBy directive) children collection. The child entity references the parent entity through a @ManyToOne association that's neither insertable nor updatable:

@Entity(name = "post")
public class Post {
    ...
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Comment> comments = new ArrayList<Comment>();
    ...
    public void addComment(Comment comment) {
        comment.setPost(this);
        comments.add(comment);
    }
}

@Entity(name = "comment")
public class Comment {
    ...
    @ManyToOne
    @JoinColumn(name = "post_id", insertable = false, updatable = false)
    private Post post;
    ...
} The bidirectional child-owning-side-parent association mapping The child entity references the parent entity through a @ManyToOne association, and the parent has a mappedBy @OneToMany children collection. The parent side is the inverse side so only the @ManyToOne state changes are propagated to the database. Even if there’s only one owning side, it’s always a good practice to keep both sides in sync by using the add/removeChild() methods. @Entity(name = "post") public class Post { ... @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true, mappedBy = "post") private List<Comment> comments = new ArrayList<Comment>(); ...public void addComment(Comment comment) { comment.setPost(this); comments.add(comment); } }@Entity(name = "comment") public class Comment { ... @ManyToOne private Post post; ... } The unidirectional child-owning-side-parent association mapping The child entity references the parent through a @ManyToOne association. The parent doesn’t have a @OneToMany children collection so the child entity becomes the owning side. This association mapping resembles the relational data foreign key linkage. @Entity(name = "comment") public class Comment { ... @ManyToOne private Post post; ... } Collection versioning The 3.4.2 section of the JPA 2.1 specification defines optimistic locking as: The version attribute is updated by the persistence provider runtime when the object is written to the database. All non-relationship fields and proper ties and all relationships owned by the entity are included in version checks[35]. [35] This includes owned relationships maintained in join tables N.B. Only owning-side children collection can update the parent version. Testing time Let’s test how the parent-child association type affects the parent versioning. Because we are interested in the children collection dirty checking, the unidirectional child-owning-side-parent association is going to be skipped, as in that case the parent doesn’t contain a children collection. 
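All of the mappings above rely on a version attribute that the snippets leave out, and the test code below reads it through getVersion(). A minimal sketch of such a versioned parent entity, reusing the Comment entity from the mappings above; the field and accessor names are illustrative, not taken from the original source:

import java.util.ArrayList;
import java.util.List;

import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import javax.persistence.Version;

@Entity(name = "post")
public class Post {

    @Id
    private Long id;

    private String name;

    // Incremented by Hibernate whenever the entity, or a collection it owns,
    // is modified; used in the "update ... where version=?" check.
    @Version
    private int version;

    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Comment> comments = new ArrayList<Comment>();

    public void setId(Long id) { this.id = id; }

    public void setName(String name) { this.name = name; }

    public int getVersion() { return version; }

    public void addComment(Comment comment) {
        comments.add(comment);
    }
}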
Test case The following test case is going to be used for all collection type use cases: protected void simulateConcurrentTransactions(final boolean shouldIncrementParentVersion) { final ExecutorService executorService = Executors.newSingleThreadExecutor();doInTransaction(new TransactionCallable<Void>() { @Override public Void execute(Session session) { try { P post = postClass.newInstance(); post.setId(1L); post.setName("Hibernate training"); session.persist(post); return null; } catch (Exception e) { throw new IllegalArgumentException(e); } } });doInTransaction(new TransactionCallable<Void>() { @Override public Void execute(final Session session) { final P post = (P) session.get(postClass, 1L); try { executorService.submit(new Callable<Void>() { @Override public Void call() throws Exception { return doInTransaction(new TransactionCallable<Void>() { @Override public Void execute(Session _session) { try { P otherThreadPost = (P) _session.get(postClass, 1L); int loadTimeVersion = otherThreadPost.getVersion(); assertNotSame(post, otherThreadPost); assertEquals(0L, otherThreadPost.getVersion()); C comment = commentClass.newInstance(); comment.setReview("Good post!"); otherThreadPost.addComment(comment); _session.flush(); if (shouldIncrementParentVersion) { assertEquals(otherThreadPost.getVersion(), loadTimeVersion + 1); } else { assertEquals(otherThreadPost.getVersion(), loadTimeVersion); } return null; } catch (Exception e) { throw new IllegalArgumentException(e); } } }); } }).get(); } catch (Exception e) { throw new IllegalArgumentException(e); } post.setName("Hibernate Master Class"); session.flush(); return null; } }); } The unidirectional parent-owning-side-child association testing #create tables Query:{[create table comment (id bigint generated by default as identity (start with 1), review varchar(255), primary key (id))][]} Query:{[create table post (id bigint not null, name varchar(255), version integer not null, primary key (id))][]} Query:{[create table post_comment (post_id bigint not null, comments_id bigint not null, comment_index integer not null, primary key (post_id, comment_index))][]} Query:{[alter table post_comment add constraint UK_se9l149iyyao6va95afioxsrl unique (comments_id)][]} Query:{[alter table post_comment add constraint FK_se9l149iyyao6va95afioxsrl foreign key (comments_id) references comment][]} Query:{[alter table post_comment add constraint FK_6o1igdm04v78cwqre59or1yj1 foreign key (post_id) references post][]}#insert post in primary transaction Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]}#select post in secondary transaction Query:{[select entityopti0_.id as id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]}#insert comment in secondary transaction #optimistic locking post version update in secondary transaction Query:{[insert into comment (id, review) values (default, ?)][Good post!]} Query:{[update post set name=?, version=? where id=? and version=?][Hibernate training,1,1,0]} Query:{[insert into post_comment (post_id, comment_index, comments_id) values (?, ?, ?)][1,0,1]}#optimistic locking exception in primary transaction Query:{[update post set name=?, version=? where id=? 
and version=?][Hibernate Master Class,1,1,0]} org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.EntityOptimisticLockingOnUnidirectionalCollectionTest$Post#1] The unidirectional parent-owning-side-child component association testing #create tables Query:{[create table post (id bigint not null, name varchar(255), version integer not null, primary key (id))][]} Query:{[create table post_comments (post_id bigint not null, review varchar(255), comment_index integer not null, primary key (post_id, comment_index))][]} Query:{[alter table post_comments add constraint FK_gh9apqeduab8cs0ohcq1dgukp foreign key (post_id) references post][]}#insert post in primary transaction Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]}#select post in secondary transaction Query:{[select entityopti0_.id as id1_0_0_, entityopti0_.name as name2_0_0_, entityopti0_.version as version3_0_0_ from post entityopti0_ where entityopti0_.id=?][1]} Query:{[select comments0_.post_id as post_id1_0_0_, comments0_.review as review2_1_0_, comments0_.comment_index as comment_3_0_ from post_comments comments0_ where comments0_.post_id=?][1]}#insert comment in secondary transaction #optimistic locking post version update in secondary transaction Query:{[update post set name=?, version=? where id=? and version=?][Hibernate training,1,1,0]} Query:{[insert into post_comments (post_id, comment_index, review) values (?, ?, ?)][1,0,Good post!]}#optimistic locking exception in primary transaction Query:{[update post set name=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]} org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.EntityOptimisticLockingOnComponentCollectionTest$Post#1] The bidirectional parent-owning-side-child association testing #create tables Query:{[create table comment (id bigint generated by default as identity (start with 1), review varchar(255), post_id bigint, primary key (id))][]} Query:{[create table post (id bigint not null, name varchar(255), version integer not null, primary key (id))][]} Query:{[create table post_comment (post_id bigint not null, comments_id bigint not null)][]} Query:{[alter table post_comment add constraint UK_se9l149iyyao6va95afioxsrl unique (comments_id)][]} Query:{[alter table comment add constraint FK_f1sl0xkd2lucs7bve3ktt3tu5 foreign key (post_id) references post][]} Query:{[alter table post_comment add constraint FK_se9l149iyyao6va95afioxsrl foreign key (comments_id) references comment][]} Query:{[alter table post_comment add constraint FK_6o1igdm04v78cwqre59or1yj1 foreign key (post_id) references post][]}#insert post in primary transaction Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]}#select post in secondary transaction Query:{[select entityopti0_.id as id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]} Query:{[select comments0_.post_id as post_id1_1_0_, comments0_.comments_id as comments2_2_0_, entityopti1_.id as id1_0_1_, entityopti1_.post_id as post_id3_0_1_, entityopti1_.review as review2_0_1_, entityopti2_.id as id1_1_2_, entityopti2_.name as name2_1_2_, entityopti2_.version as version3_1_2_ from post_comment 
comments0_ inner join comment entityopti1_ on comments0_.comments_id=entityopti1_.id left outer join post entityopti2_ on entityopti1_.post_id=entityopti2_.id where comments0_.post_id=?][1]}#insert comment in secondary transaction #optimistic locking post version update in secondary transaction Query:{[insert into comment (id, review) values (default, ?)][Good post!]} Query:{[update post set name=?, version=? where id=? and version=?][Hibernate training,1,1,0]} Query:{[insert into post_comment (post_id, comments_id) values (?, ?)][1,1]}#optimistic locking exception in primary transaction Query:{[update post set name=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]} org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.EntityOptimisticLockingOnBidirectionalParentOwningCollectionTest$Post#1] The bidirectional child-owning-side-parent association testing #create tables Query:{[create table comment (id bigint generated by default as identity (start with 1), review varchar(255), post_id bigint, primary key (id))][]} Query:{[create table post (id bigint not null, name varchar(255), version integer not null, primary key (id))][]} Query:{[alter table comment add constraint FK_f1sl0xkd2lucs7bve3ktt3tu5 foreign key (post_id) references post][]}#insert post in primary transaction Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]}#select post in secondary transaction Query:{[select entityopti0_.id as id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]}#insert comment in secondary transaction #post version is not incremented in secondary transaction Query:{[insert into comment (id, post_id, review) values (default, ?, ?)][1,Good post!]} Query:{[select count(id) from comment where post_id =?][1]}#update works in primary transaction Query:{[update post set name=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]} Overruling default collection versioning If the default owning-side collection versioning is not suitable for your use case, you can always overrule it with Hibernate @OptimisticLock annotation. Let’s overrule the default parent version update mechanism for bidirectional parent-owning-side-child association: @Entity(name = "post") public class Post { ... @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true) @OptimisticLock(excluded = true) private List<Comment> comments = new ArrayList<Comment>(); ...public void addComment(Comment comment) { comment.setPost(this); comments.add(comment); } }@Entity(name = "comment") public class Comment { ... @ManyToOne @JoinColumn(name = "post_id", insertable = false, updatable = false) private Post post; ... 
} This time, the children collection changes won’t trigger a parent version update: #create tables Query:{[create table comment (id bigint generated by default as identity (start with 1), review varchar(255), post_id bigint, primary key (id))][]} Query:{[create table post (id bigint not null, name varchar(255), version integer not null, primary key (id))][]} Query:{[create table post_comment (post_id bigint not null, comments_id bigint not null)][]} Query:{[alter table post_comment add constraint UK_se9l149iyyao6va95afioxsrl unique (comments_id)][]} Query:{[alter table comment add constraint FK_f1sl0xkd2lucs7bve3ktt3tu5 foreign key (post_id) references post][]} Query:{[alter table post_comment add constraint FK_se9l149iyyao6va95afioxsrl foreign key (comments_id) references comment][]} Query:{[alter table post_comment add constraint FK_6o1igdm04v78cwqre59or1yj1 foreign key (post_id) references post][]}#insert post in primary transaction Query:{[insert into post (name, version, id) values (?, ?, ?)][Hibernate training,0,1]}#select post in secondary transaction Query:{[select entityopti0_.id as id1_1_0_, entityopti0_.name as name2_1_0_, entityopti0_.version as version3_1_0_ from post entityopti0_ where entityopti0_.id=?][1]} Query:{[select comments0_.post_id as post_id1_1_0_, comments0_.comments_id as comments2_2_0_, entityopti1_.id as id1_0_1_, entityopti1_.post_id as post_id3_0_1_, entityopti1_.review as review2_0_1_, entityopti2_.id as id1_1_2_, entityopti2_.name as name2_1_2_, entityopti2_.version as version3_1_2_ from post_comment comments0_ inner join comment entityopti1_ on comments0_.comments_id=entityopti1_.id left outer join post entityopti2_ on entityopti1_.post_id=entityopti2_.id where comments0_.post_id=?][1]}#insert comment in secondary transaction Query:{[insert into comment (id, review) values (default, ?)][Good post!]} Query:{[insert into post_comment (post_id, comments_id) values (?, ?)][1,1]}#update works in primary transaction Query:{[update post set name=?, version=? where id=? and version=?][Hibernate Master Class,1,1,0]} Conclusion It’s very important to understand how various modelling structures impact concurrency patterns. The owning-side collections changes are taken into consideration when incrementing the parent version number, and you can always bypass it using the @OptimisticLock annotation.Code available on GitHub.Reference: Hibernate collections optimistic locking from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....
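As a closing note on the StaleObjectStateException shown in the logs above: the usual way to deal with it is not to prevent it but to retry the conflicting unit of work against the fresh entity state. A hedged sketch of such a retry loop, not taken from the original post; with Hibernate's JPA integration the failure typically surfaces as javax.persistence.OptimisticLockException.

import javax.persistence.OptimisticLockException;

public final class OptimisticRetry {

    private OptimisticRetry() {}

    /**
     * Runs the given unit of work and retries it when an optimistic locking
     * conflict is detected. Each attempt must re-read the entities it updates
     * so that the new version numbers are used.
     */
    public static void execute(Runnable unitOfWork, int maxAttempts) {
        for (int attempt = 1; ; attempt++) {
            try {
                unitOfWork.run();
                return;
            } catch (OptimisticLockException e) {
                if (attempt >= maxAttempts) {
                    throw e;
                }
                // another transaction won the race; try again with fresh data
            }
        }
    }
}

For example, OptimisticRetry.execute(() -> updatePostTitle(1L, "Hibernate Master Class"), 3), where updatePostTitle is whatever transactional method performs the update and each attempt runs in its own transaction.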

Spring Caching abstraction and Google Guava Cache

Spring provides great out-of-the-box support for caching expensive method calls. The caching abstraction is covered in great detail here. My objective here is to cover one of the newer cache implementations that Spring now provides with version 4.0+ of the framework: using Google Guava Cache.

In brief, consider a service which has a few slow methods:

public class DummyBookService implements BookService {

    @Override
    public Book loadBook(String isbn) {
        // Slow method 1.
    }

    @Override
    public List<Book> loadBookByAuthor(String author) {
        // Slow method 2
    }
}

With Spring's caching abstraction, repeated calls with the same parameter can be sped up by an annotation on the method along these lines. Here the result of loadBook is being cached in a "book" cache and the listing of books is cached in another "books" cache:

public class DummyBookService implements BookService {

    @Override
    @Cacheable("book")
    public Book loadBook(String isbn) {
        // slow response time..
    }

    @Override
    @Cacheable("books")
    public List<Book> loadBookByAuthor(String author) {
        // Slow listing
    }
}

Now, the caching abstraction support requires a CacheManager to be available, which is responsible for managing the underlying caches used to store the cached results. With the new Guava Cache support, the CacheManager is along these lines:

@Bean
public CacheManager cacheManager() {
    return new GuavaCacheManager("books", "book");
}

Google Guava Cache provides a rich API to pre-load the cache, set the eviction duration based on last access or creation time, set the size of the cache, etc. If the cache is to be customized, a Guava CacheBuilder can be passed to the CacheManager for this customization:

@Bean
public CacheManager cacheManager() {
    GuavaCacheManager guavaCacheManager = new GuavaCacheManager();
    guavaCacheManager.setCacheBuilder(CacheBuilder.newBuilder().expireAfterAccess(30, TimeUnit.MINUTES));
    return guavaCacheManager;
}

This works well if all the caches have a similar configuration. But what if the caches need to be configured differently? For example, in the sample above I may want the "book" cache to never expire but the "books" cache to have an expiration of 30 minutes. In that case the GuavaCacheManager abstraction does not work well; a better solution is actually to use a SimpleCacheManager, which provides a more direct way to get to the cache and can be configured this way:

@Bean
public CacheManager cacheManager() {
    SimpleCacheManager simpleCacheManager = new SimpleCacheManager();
    GuavaCache cache1 = new GuavaCache("book", CacheBuilder.newBuilder().build());
    GuavaCache cache2 = new GuavaCache("books", CacheBuilder.newBuilder()
            .expireAfterAccess(30, TimeUnit.MINUTES)
            .build());
    simpleCacheManager.setCaches(Arrays.asList(cache1, cache2));
    return simpleCacheManager;
}

This approach works very nicely. If required, certain caches can even be backed by different caching engines: say a simple hashmap, some by Guava or EhCache, some by distributed caches like Gemfire.

Reference: Spring Caching abstraction and Google Guava Cache from our JCG partner Biju Kunjummen at the all and sundry blog....
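The post only shows the read side of the abstraction. When book data changes, the same annotations can keep the Guava-backed caches consistent; a minimal sketch, assuming the Book/BookService types from above, a getIsbn() property on Book, and a hypothetical updateBook method that is not part of the original example:

import java.util.List;

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;

public class CachingBookService implements BookService {

    @Override
    @Cacheable("book")
    public Book loadBook(String isbn) {
        // slow lookup, cached per ISBN
        return null;
    }

    @Override
    @Cacheable("books")
    public List<Book> loadBookByAuthor(String author) {
        // slow listing, cached per author
        return null;
    }

    // Hypothetical write path: refresh the single-book entry and drop the
    // author listings, which may now be stale.
    @CachePut(value = "book", key = "#book.isbn")
    @CacheEvict(value = "books", allEntries = true)
    public Book updateBook(Book book) {
        // persist the change here
        return book;
    }
}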

Tomcat to Wildfly: Configuring Database connectivity

This excerpt has been taken from the “From Tomcat to WildFly” book in which you’ll learn how to port your existing Tomcat architectures to WildFly, including both the server configuration and the applications running on the top of it. WildFly is a fully compliant Java Enterprise Edition 7 container with a much wider set of available services and options compared to Tomcat. The book will also give you exposure to the most common pitfalls and drawbacks, which might happen during the migration.  Table Of Contents1. Introduction 2. Installing the JDBC Driver as module 3. Registering the JDBC driver on the application server 4. Configuring a datasource which uses the JDBC driver 5. Configuration output 6. Porting Datasource parameters to WildFly6.1. Minimum and Maximum pool size 6.2. Dealing with Idle connections 6.3. Setting a timeout when acquiring connections 6.4. Handling Connection leaks 6.5. Configuring Statement Cache1. IntroductionProblem: I have a Datasource configuration on Tomcat, which is used to collect database connections from a pool. I need to port my configuration on WildFly.On Apache Tomcat, the datasource configuration can be included in the global section of your server.xml file. Here is for example a configuration for the popular MySQL database:<Resource name="jdbc/mysqlds" auth="Container" type="javax.sql.DataSource" maxActive="100" maxIdle="30" maxWait="10000" username="tomcat" password="tomcat" driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/mydatabase"/> In addition, the following lines should be placed on the WEB-INF/web.xml for the application-specific content.<web-app xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd" version="2.4"> <description>Tomcat DB</description> <resource-ref> <description>Database Connection</description> <res-ref-name>jdbc/mysqlds</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref> </web-app> In order to be able to load the MySQL JDBC driver, you need to include the JAR library in the CATALINA_HOME/lib folder:Let’s see how to configure a Datasource configuration in WildFly. This can be accomplished in several ways, yet all possible solutions involve the following steps:Installing the JDBC driver as module Registering the JDBC driver on the application server Configuring a datasource which uses the JDBC driverWe will see the recommended approach, which requires using the Command Line Interface, although we will mention some alternatives available. 2. Installing the JDBC Driver as module WildFly is based on the assumption that every library is itself a module. So, we will turn at first the JDBC driver into a module. This can be done by creating a file path structure under the JBOSS_HOME/modules directory. For example, in order to install a MySQL JDBC driver, create a directory structure as follows: JBOSS_HOME/modules/com/mysql/main.Copy the JDBC driver JAR into the main subdirectory. 
In the main subdirectory, create a module.xml file containing the following definitions (just adapt the JDBC driver name to your case):<?xml version="1.0" encoding="UTF-8"?> <module xmlns="urn:jboss:module:1.0" name="com.mysql"><resources> <resource-root path="mysql-connector-java-5.1.24-bin.jar"/> </resources><dependencies> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies></module> The above procedure can be completed as well using the Command Line Interface, with a single command, which will create the file system structure, copy the JDBC driver into the main folder and configure a module.xml based on the options provided to the CLI. Here is how to do it, supposing that MySQL JDBC driver is available in the /home/wildfly folder:module add --name=com.mysql --resources=/home/wildfly/mysql-connector-java-5.1.24-bin.jar --dependencies=javax.api,javax.transaction.api 3. Registering the JDBC driver on the application server Now that your MySQL is available as module on the application server, we will register it as JDBC driver. When using the CLI, this is a one-step operation:/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql) Now, you can check that your MySQL driver is enlisted through the available JDBC Drivers. Here is how you can achieve it:[standalone@localhost:9990 /] /subsystem=datasources:installed-drivers-list { . . . . . . . . . . . "driver-name" => "mysql", "deployment-name" => undefined, "driver-module-name" => "com.mysql", "module-slot" => "main", "driver-datasource-class-name" => "", "driver-xa-datasource-class-name" => "", "driver-class-name" => "com.mysql.jdbc.Driver", "driver-major-version" => 5, "driver-minor-version" => 1, "jdbc-compliant" => false } ] } 4. Configuring a datasource which uses the JDBC driver The last step will actually create a datasource to be used by your applications. For this purpose, we will use the CLI data-source shortcut command, which requires as input the Pool name, the JNDI bindings, the JDBC Connection parameters and finally the security settings (username and password):data-source add --jndi-name=java:/jdbc/mysqlds --name=MySQLPool --connection-url=jdbc:mysql://localhost:3306/mydatabase --driver-name=mysql --user-name=jboss --password=jbossPlease note that the JNDI name for the Datasource must use the prefix java:/ to be accepted as valid. Therefore the binding used in tomcat (“jdbc/mysqlds”) has been changed to “java:/ jdbc/mysqlds”.5. Configuration output If you have followed the above steps, you should have the following datasource configuration available in your datasource section:<datasources> <datasource jndi-name="java:/jdbc/mysqlds" pool-name="MySQLPool" enabled="true"> <connection-url>jdbc:mysql://localhost:3306/mydatabase</connection-url> <driver>mysql</driver> <security> <user-name>jboss</user-name> <password>jboss</password> </security> </datasource> <drivers> <driver name="mysql" module="com.mysql"/> </drivers> </datasources> 6. Porting Datasource parameters to WildFly Configuring the datasource on the application server is the first milestone for porting your applications on WildFly. Odds are however that you are using some specific connection pool settings that need to be ported on the application server. Some of these parameters have an identical match on WildFly, some others are based on different pooling strategies; therefore you need to adapt the configuration when porting them to the application server. 
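Before going through the individual pool options, it is worth a quick look at how application code consumes the ported datasource. On Tomcat the lookup typically went through java:comp/env/jdbc/mysqlds (via the resource-ref shown earlier); on WildFly the binding created above can be injected directly. A minimal sketch, with an arbitrary servlet name and URL pattern:

import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;

import javax.annotation.Resource;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

@WebServlet("/datasource-check")
public class DataSourceCheckServlet extends HttpServlet {

    // Injects the datasource registered above under java:/jdbc/mysqlds
    @Resource(lookup = "java:/jdbc/mysqlds")
    private DataSource dataSource;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try (Connection connection = dataSource.getConnection()) {
            resp.getWriter().println("Connected to " + connection.getMetaData().getURL());
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }
}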
Let’s see how to port the most common pool options to WildFly:

6.1. Minimum and Maximum pool size

Choosing the right pool size is a must for the performance of your applications. Tomcat’s minimum pool size is determined by the minIdle parameter, and the maximum pool size is configured through maxActive. The initial size of the pool, on the other hand, is configured with the initialSize parameter. Here is a sample configuration:

<Resource name="jdbc/mysqlds"
          auth="Container"
          type="javax.sql.DataSource"
          maxActive="100"
          minIdle="30"
          initialSize="15"
          username="tomcat"
          password="tomcat"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/mydatabase"/>

When porting to WildFly, the parameters are named min-pool-size and max-pool-size, respectively, and can be set with any management instrument. Here is how to change them for the default datasource:

/subsystem=datasources/data-source=ExampleDS/:write-attribute(name=min-pool-size,value=30)
/subsystem=datasources/data-source=ExampleDS/:write-attribute(name=max-pool-size,value=100)

The initial pool size, on the other hand, can be set using:

/subsystem=datasources/data-source=ExampleDS/:write-attribute(name=initial-pool-size,value=5)

Additionally, note that WildFly also has an attribute named pool-prefill, which determines whether to attempt to prefill the connection pool up to the minimum number of connections:

/subsystem=datasources/data-source=ExampleDS/:write-attribute(name=pool-prefill,value=true)

6.2. Dealing with Idle connections

Connections that have been created but haven’t been used by your applications are classified as idle connections. WildFly and Tomcat deal with idle connections in different ways. In more detail, Tomcat uses both a minIdle and a maxIdle parameter to determine, respectively, the minimum and maximum number of idle connections that should be kept in the pool. We have already discussed the minIdle parameter, which maps to WildFly’s min-pool-size. The maxIdle parameter, on the other hand, has no corresponding match on WildFly. The closest match is idle-timeout-minutes, which is the number of minutes after which unused connections are closed (15 minutes by default). You can change this parameter, say to 10 minutes, as follows:

/subsystem=datasources/data-source=ExampleDS/:write-attribute(name=idle-timeout-minutes,value=10)

6.3. Setting a timeout when acquiring connections

If all the connections in a pool are busy, your applications will obviously have to wait for a connection to be released. As you can imagine, there is a timeout for this scenario, which is handled by the maxWait parameter in Tomcat. In the following example the timeout is set to 30 seconds:

<Resource name="jdbc/mysqlds"
          auth="Container"
          type="javax.sql.DataSource"
          maxWait="30000"
          . . . . />

WildFly has a corresponding parameter named blocking-timeout-wait-millis; in the following CLI command we are setting it to 1 second (1000 ms):

/subsystem=datasources/data-source=ExampleDS/:write-attribute(name=blocking-timeout-wait-millis,value=1000)

6.4. Handling Connection leaks

In Tomcat terms, a connection leak is called an “abandoned connection”; it happens when you create a Statement, PreparedStatement or CallableStatement and forget to close the connection or the statement (or, more often, you don’t put the Connection close in a finally block). You can handle abandoned connections in Tomcat by enabling the removeAbandoned parameter.
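Before turning to the configuration knobs, it is worth recalling what a leak-free access pattern looks like in application code, since closing resources correctly removes the problem at the source. Here is a minimal sketch (not from the book; the repository class, SQL and table name are made up for illustration), using try-with-resources so that the connection, statement and result set are closed even when the query throws:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.sql.DataSource;

public class CustomerRepository {

    private final DataSource dataSource;

    public CustomerRepository(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countCustomers() throws SQLException {
        // All three resources are closed automatically in reverse order,
        // so the pool never ends up holding an "abandoned" connection.
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement =
                     connection.prepareStatement("SELECT COUNT(*) FROM customer");
             ResultSet resultSet = statement.executeQuery()) {
            resultSet.next();
            return resultSet.getInt(1);
        }
    }
}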
When removeAbandoned is set to true, a connection is considered abandoned and eligible for removal if it has not been used for longer than removeAbandonedTimeout (300 seconds by default):

<Resource name="jdbc/mysqlds"
          auth="Container"
          type="javax.sql.DataSource"
          removeAbandoned="true"
          removeAbandonedTimeout="300"
          . . . . />

On the WildFly side, there is no corresponding tweak for abruptly closing connections that qualify as abandoned. There are, however, some useful parameters that can be used to detect or trace the issue. If you are concerned about Statements (and PreparedStatements), you can use the track-statements parameter, which checks for unclosed statements when a connection is returned to the pool and for unclosed result sets when a statement is closed or returned to the prepared statement cache. Valid values are:

false: do not track statements and results
true: track statements and result sets and warn when they are not closed
nowarn: track statements but do not warn about them being unclosed

Here is how to set this parameter to use NOWARN:

/subsystem=datasources/data-source=ExampleDS/:write-attribute(name=track-statements,value=NOWARN)

In addition, you can enable debugging of the caching layer, which is handled in WildFly by the Cached Connection Manager, part of the Connector subsystem (JCA):

/subsystem=jca/cached-connection-manager=cached-connection-manager/:write-attribute(name=debug,value=true)

You should also enable the error parameter, which will let you detect any error related to the cached connection manager:

/subsystem=jca/cached-connection-manager=cached-connection-manager/:write-attribute(name=error,value=true)

Once you have enabled logging, you will see the following information for each connection acquired from the pool:

DEBUG [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (MSC service thread 1-4) {JNDI_NAME}: getConnection(null, null) [1/100]

On the other hand, you will read the following message when connections are returned to the pool:

DEBUG [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (MSC service thread 1-4) {JNDI_NAME}: returnConnection(607e334, false) [1/99]

If you are debugging the statements emitted by your applications, you can easily find out where the connection leak is. Finally, if you want some details about what is going on at the JDBC level, you can log the JDBC statements by setting the spy property to true:

/subsystem=datasources/data-source=MySQLPool/:write-attribute(name=spy,value=true)

In order to see the JDBC statements in your server logs, you also need to create a logger element, which traces the jboss.jdbc.spy package. You can do it as follows:

/subsystem=logging/logger=jboss.jdbc.spy/:add(level=TRACE)

Reload your server configuration and check the server logs, which are by default contained in JBOSS_HOME/standalone/log/server.log (standalone mode) or JBOSS_HOME/domain/servers/[server-name]/log/server.log (domain mode).

6.5. Configuring Statement Cache

A Prepared Statement is a pre-compiled object on the database whose access plan will be reused to execute further queries much more quickly than normal queries. Prepared statements can also be cached by the application server itself when the same statements need to be issued across different requests.
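As a reminder of why this cache matters, and when it can actually be hit, consider the following sketch (not from the book; the DAO, table and column names are made up). Statement caches are keyed on the SQL text, so only the parameterized variant can be reused across requests:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OrderDao {

    // Cache-unfriendly: every customer id produces a different SQL string,
    // so the compiled statement can never be reused (and the code is open
    // to SQL injection as well).
    public int countOrdersNoReuse(Connection connection, long customerId) throws SQLException {
        try (PreparedStatement statement = connection.prepareStatement(
                "SELECT COUNT(*) FROM orders WHERE customer_id = " + customerId);
             ResultSet resultSet = statement.executeQuery()) {
            resultSet.next();
            return resultSet.getInt(1);
        }
    }

    // Cache-friendly: the SQL text is constant and the variable part is bound
    // as a parameter, so the same cached statement is reused across requests.
    public int countOrders(Connection connection, long customerId) throws SQLException {
        try (PreparedStatement statement = connection.prepareStatement(
                "SELECT COUNT(*) FROM orders WHERE customer_id = ?")) {
            statement.setLong(1, customerId);
            try (ResultSet resultSet = statement.executeQuery()) {
                resultSet.next();
                return resultSet.getInt(1);
            }
        }
    }
}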
Tomcat’s jdbc-pool can manage the prepared statement cache using a JDBC interceptor, which is set as a JDBC property during pool creation. For example:

<Resource name="jdbc/mysqlds"
          auth="Container"
          type="javax.sql.DataSource"
          jdbcInterceptors="StatementCache(prepared=true,callable=false,max=50)"
          . . . . />

When running on WildFly, you can set the prepared statement cache size by writing the prepared-statements-cache-size attribute, as in the following example, which sets its size to 25 statements:

/subsystem=datasources/data-source=ExampleDS/:write-attribute(name=prepared-statements-cache-size,value=25)

This excerpt has been taken from the “From Tomcat to WildFly” book, in which you’ll learn how to port your existing Tomcat architectures to WildFly, including both the server configuration and the applications running on top of it. You will also get exposure to the most common pitfalls and drawbacks that you might encounter during the migration.

Apache Tomcat is a popular Web server and Servlet Container developed as an open-source project by the Apache Software Foundation since 1999. Today it is one of the most widely used platforms for running Web applications, both in simple sites and in large networks. Nevertheless, the set of libraries available on Apache Tomcat is usually just enough for very simple architectures that require only the HTTP protocol and a limited number of services; this has led to the natural tendency to extend its capabilities with frameworks such as Spring, Hibernate, JDO or Struts. Although the purpose of this book is not to cast these architectures in a bad light, we do believe that a comparative knowledge of other solutions can help you choose the best one for your projects.
jooq-logo-black-100x80

A RESTful JDBC HTTP Server built on top of jOOQ

The jOOQ ecosystem and community are continually growing. We’re personally always thrilled to see other Open Source projects built on top of jOOQ. Today, we’re very happy to introduce you to a very interesting approach to combining REST and RDBMS, by Björn Harrtell.

Björn Harrtell is a Swedish programmer who has been coding since childhood. He is usually busy writing GIS systems and integrations at Sweco Position AB, but sometimes he spends time getting involved in Open Source, contributing to projects like GeoTools and OpenLayers. Björn has also initiated a few minor Open Source projects himself, and one of the latest he has been working on is jdbc-http-server. We’re excited to publish Björn’s guest post introducing his interesting work:

JDBC HTTP Server

Ever found yourself writing a lot of REST resources that do simple CRUD against a relational database and felt the code was repeating itself? In that case, jdbc-http-server might be a project worth checking out. jdbc-http-server exposes a relational database instance as a discoverable REST API, making it possible to perform simple CRUD from a browser application without requiring any backend code to be written.

A discoverable REST API means you can access the root resource at / and follow links to subresources from there. For example, let’s say you have a database named testdb with a table named testtable in the public schema; you can then do the following operations:

Retrieve (GET), update (PUT) or delete (DELETE) a single row at: /db/testdb/schemas/public/tables/testtable/rows/1
Retrieve (GET), update (PUT) rows or create a new row (POST) at: /db/testdb/schemas/public/tables/testtable/rows

The above resources accept the parameters select, where, limit, offset and orderby where applicable. For example, GET a maximum of 10 rows where cost>100 at: /db/testdb/schemas/public/tables/testtable/rows?where=cost>100&limit=10

jdbc-http-server is database engine agnostic, since it utilizes jOOQ to generate SQL in a dialect suited to the target database engine. At the moment H2, PostgreSQL and HSQLDB are covered by automated tests. Currently the only available representation format is JSON, but adding more is an interesting possibility. Feedback and, of course, contributions are welcome!

Reference: A RESTful JDBC HTTP Server built on top of jOOQ from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....
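To make the URLs above concrete, here is a minimal sketch of calling such an endpoint from plain Java; the host, port and database name are assumptions for illustration and not part of the project’s documented setup:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JdbcHttpServerClient {

    public static void main(String[] args) throws Exception {
        // Hypothetical local jdbc-http-server instance; the '>' in the where
        // clause is URL-encoded as %3E.
        URL url = new URL("http://localhost:8080/db/testdb/schemas/public/tables/testtable/rows"
                + "?where=cost%3E100&limit=10");

        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        connection.setRequestProperty("Accept", "application/json");

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // JSON representation of the matching rows
            }
        } finally {
            connection.disconnect();
        }
    }
}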
javafx-logo

JavaFX Tip 15: ListView Autoscrolling

I recently had to implement autoscrolling functionality for FlexGanttFX and thought that my solution might be useful for others. You find the basic concepts of it in the listing below. The main idea is that a background thread is used to adjust the pixel location of the virtual flow node used by the list view. The thread starts when a drag over is detected “close” to the top or bottom edges. “Close” is defined by a proximity variable. This code can obviously be improved by using a property for the proximity value and the types “Task” and “Service” for the threading work.

package com.dlsc;

import javafx.application.Platform;
import javafx.scene.Node;
import javafx.scene.control.ListView;
import javafx.scene.input.ClipboardContent;
import javafx.scene.input.DragEvent;
import javafx.scene.input.Dragboard;
import javafx.scene.input.MouseEvent;
import javafx.scene.input.TransferMode;
import javafx.scene.layout.Region;

/*
 * Yes, unfortunately we need to use private API for this.
 */
import com.sun.javafx.scene.control.skin.VirtualFlow;

public class AutoscrollListView<T> extends ListView<T> {

    final double proximity = 20;

    public AutoscrollListView() {
        addEventFilter(MouseEvent.DRAG_DETECTED, evt -> startDrag());
        addEventFilter(DragEvent.DRAG_OVER, evt -> autoscrollIfNeeded(evt));
        addEventFilter(DragEvent.DRAG_EXITED, evt -> stopAutoScrollIfNeeded(evt));
        addEventFilter(DragEvent.DRAG_DROPPED, evt -> stopAutoScrollIfNeeded(evt));
        addEventFilter(DragEvent.DRAG_DONE, evt -> stopAutoScrollIfNeeded(evt));
    }

    private void startDrag() {
        Dragboard db = startDragAndDrop(TransferMode.MOVE);
        ClipboardContent content = new ClipboardContent();

        /*
         * We have to add some content, otherwise drag over
         * will not be called.
         */
        content.putString("dummy");
        db.setContent(content);
    }

    private void autoscrollIfNeeded(DragEvent evt) {
        evt.acceptTransferModes(TransferMode.ANY);

        /*
         * Determine the "hot" region that will trigger automatic scrolling.
         * Ideally we use the clipped container of the list view skin but when
         * the rows are empty the dimensions of the clipped container will be
         * 0x0. In this case we try to use the virtual flow.
         */
        Region hotRegion = getClippedContainer();
        if (hotRegion.getBoundsInLocal().getWidth() < 1) {
            hotRegion = this;
            if (hotRegion.getBoundsInLocal().getWidth() < 1) {
                stopAutoScrollIfNeeded(evt);
                return;
            }
        }

        double yOffset = 0; // y offset

        double delta = evt.getSceneY() - hotRegion.localToScene(0, 0).getY();
        if (delta < proximity) {
            yOffset = -(proximity - delta);
        }

        delta = hotRegion.localToScene(0, 0).getY() + hotRegion.getHeight() - evt.getSceneY();
        if (delta < proximity) {
            yOffset = proximity - delta;
        }

        if (yOffset != 0) {
            autoscroll(yOffset);
        } else {
            stopAutoScrollIfNeeded(evt);
        }
    }

    private VirtualFlow<?> getVirtualFlow() {
        return (VirtualFlow<?>) lookup("VirtualFlow");
    }

    private Region getClippedContainer() {

        /*
         * Safest way to find the clipped container. lookup() does not work at
         * all.
         */
        for (Node child : getVirtualFlow().getChildrenUnmodifiable()) {
            if (child.getStyleClass().contains("clipped-container")) {
                return (Region) child;
            }
        }

        return null;
    }

    class ScrollThread extends Thread {
        private boolean running = true;
        private double yOffset;

        public ScrollThread() {
            super("Autoscrolling List View");
            setDaemon(true);
        }

        @Override
        public void run() {

            /*
             * Some initial delay, especially useful when
             * dragging something in from the outside.
             */
            try {
                Thread.sleep(300);
            } catch (InterruptedException e1) {
                e1.printStackTrace();
            }

            while (running) {
                Platform.runLater(() -> {
                    scrollY();
                });

                try {
                    sleep(15);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }

        private void scrollY() {
            VirtualFlow<?> flow = getVirtualFlow();
            flow.adjustPixels(yOffset);
        }

        public void stopRunning() {
            this.running = false;
        }

        public void setDelta(double yOffset) {
            this.yOffset = yOffset;
        }
    }

    private ScrollThread scrollThread;

    private void autoscroll(double yOffset) {
        if (scrollThread == null) {
            scrollThread = new ScrollThread();
            scrollThread.start();
        }

        scrollThread.setDelta(yOffset);
    }

    private void stopAutoScrollIfNeeded(DragEvent evt) {
        if (scrollThread != null) {
            scrollThread.stopRunning();
            scrollThread = null;
        }
    }
}

Reference: JavaFX Tip 15: ListView Autoscrolling from our JCG partner Dirk Lemmermann at the Pixel Perfect blog....
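To try the control quickly, a minimal demo application like the following will do; it is a hypothetical sketch (the row labels are arbitrary), and the autoscrolling kicks in once you start dragging a row towards the top or bottom edge of the list:

package com.dlsc;

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.stage.Stage;

public class AutoscrollListViewDemo extends Application {

    @Override
    public void start(Stage stage) {
        AutoscrollListView<String> listView = new AutoscrollListView<>();

        // Fill the list with enough rows to make it scrollable.
        for (int i = 1; i <= 100; i++) {
            listView.getItems().add("Row " + i);
        }

        stage.setScene(new Scene(listView, 300, 400));
        stage.setTitle("AutoscrollListView Demo");
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}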
docker-logo

How Visi uses Weave and Docker

I’ve gotten a bunch of questions about how Visi, the simple web front end to Spark, works. This blog post is an overview.

Hosted Spark with a Simple Front End

Visi is a hosted Spark cluster with a simple web-based front end that allows Excel-savvy folks to enter formulas that get turned into Spark jobs. The Spark cluster and front end are built on demand, hosted in Docker containers, and communicate over the network using Weave. The web UI is presented to the user via a dynamically updated HAProxy routing table.

Nuts and Bolts

Backing Visi is a farm of OS-level instances (can’t really call them boxes or hardware, but they are thin computing machines that run Ubuntu). There are a certain number of machines in the farm “idling”, and when the idle level goes below a threshold, we fire up more instances. When too many instances are idle, we bring down the instances to a sane idle level. Each of the farm instances is connected to a Weave virtual network. The front end is also connected to the Weave virtual network.

When a user wants a new Spark cluster and Visi web front end, the executor selects machines from the farm and fires up Docker containers which contain the Spark cluster and Visi web front end, assigning a custom subnet within Weave. The Visi web front end and the HAProxy host are also put on a custom Weave subnet. This means that the Visi front end can talk to the HAProxy box (but not to other Visi front end instances) and the Spark cluster. The Spark cluster can talk to other members of the same Spark cluster and the Visi front end, but to no other Spark clusters. When the web front end signals the executor that it has started and can accept requests, the executor adds a new “backend” to the HAProxy configuration and a front-end route based on a URL and the JSESSIONID of the browser (so only the logged-in user can see the route to the back-end Visi instance).

End of Session

When the executor detects that the user’s browser has disconnected from the Visi instance, the executor causes code to collect the current state of the Visi instance (including the user’s current notebook status) and the Spark instances, and pickles this information so it can be unpickled the next time the Visi notebook is requested. Once the pickled information is saved, the executor signals the Spark cluster and Visi instance Docker containers to shut down. The table containing the state of the farm instances as well as the HAProxy configuration are updated. Finally, the executor returns the Weave subnet used for the Visi instance and Spark cluster to the available pool.

Isolation

The Docker containers represent isolated execution environments for the Visi front end and the Spark cluster. Both the Visi front end and the Spark cluster are executing arbitrary, untrusted code. Docker provides a mechanism for sandboxing the code so one user’s code cannot access another user’s code or data. Weave provides isolation at the network level such that each Spark cluster and Visi front end can only see each other, not the other instances running on the farm.

Easy Networking

In addition to the isolation provided by the Docker/Weave combination, Weave provides a really simple mechanism for connecting HAProxy to the Visi front ends without having to allocate and manage multiple ports on the farm machines. Further, Spark is at port 7077 on the Weave virtual network rather than on a non-standard Spark port. This reduces the logic necessary to allocate resources across the network.

Reference: How Visi uses Weave and Docker from our JCG partner David Pollak at the DPP’s Blog blog....