OSGi: An Introduction

OSGi, created for Java-based systems, provides a framework for building modular systems. It makes it possible to define the dependencies of each module on the others, and enables users to control the life cycle of, and dynamically replace, each component of the system. OSGi is a specification; the most common implementations are Equinox, Apache Felix and Knopflerfish. In this article, I will walk through the creation of a simple OSGi bundle in Equinox.

OSGi Structure

The basic OSGi structure can be seen in the figure on the right. OSGi implementations sit on top of the JVM and provide mechanisms for service management, component definition and execution, and life-cycle control of modules. The most essential OSGi concepts are described below.

Bundle

In OSGi systems, the components that build up the structure are called "bundles". At the deployment stage, each OSGi bundle is a jar file. The main differences between bundle jars and regular ones are the OSGi-specific manifest headers and some OSGi-specific classes. We will discuss these differences in the upcoming sections and in the example.

Services

Services provide the interaction between the bundles. A service is exposed as an interface and registered together with an implementation of that interface. As in SOA, access through OSGi services makes OSGi-based systems more loosely coupled than regular jar-based Java applications, and also makes it possible to replace components of the system at runtime. OSGi implements a service registry through which services are registered and accessed, and provides mechanisms for managing those services.

Life Cycle

OSGi provides a platform that controls the life cycle of bundles. Each bundle carries its own OSGi configuration, mainly in terms of dependencies and exported packages, and the system is run by OSGi itself.
OSGi knows the bundles that compose the system, together with a start order, from a given configuration, and applies life-cycle management to each component in that order. The bundle side of life-cycle management is handled by the "Activator" class, which implements an OSGi interface and must exist in every "regular" OSGi bundle. (This is not required for "fragment" bundles, but those are beyond the scope of this article.)

Bundles

As mentioned above, a bundle is a jar file that contains at least an Activator class and a MANIFEST file carrying OSGi-specific headers and information. A sample MANIFEST file can be seen below; let's look at the meaning of each part of the definition.

Bundle-Name: Our Bundle
Bundle-SymbolicName: us.elron.bundles.ours
Bundle-Description: Very own bundle of ours
Bundle-ManifestVersion: 1
Bundle-Version: 1.0.0
Bundle-Activator: us.elron.bundles.ours.BundleActivator
Export-Package: us.elron.bundles.ours.exported; version="1.0.0"
Import-Package: us.elron.bundles.yours.exported; version="1.3.0"

Bundle-Name: The human-readable, "appealing to the public" name of the bundle.
Bundle-SymbolicName: The only mandatory header in the MANIFEST file; it defines the unique name of the bundle in the OSGi ecosystem. Since this value must be unique, by convention it is usually the base package name of the bundle.
Bundle-Description: A description of the purpose, the "raison d'être", of the bundle.
Bundle-ManifestVersion: The manifest version of the bundle.
Bundle-Version: The OSGi bundle version.
Bundle-Activator: The class used to control the bundle's life cycle; it is called by OSGi to start or stop the bundle.
Export-Package: Packages that are to be made available to other bundles.
Import-Package: Packages that are needed for the execution of the current bundle.

Life Cycle

The OSGi structure provides the necessary mechanisms to control the life cycle of the bundles.
Bundles are subject to OSGi's control of their life cycle, in accordance with the given configuration. The life-cycle states are explained in detail below:

INSTALLED: The installation step has been completed successfully. Neither dependency analysis nor class loading has been performed yet; only the required first steps, such as reading the bundle properties by analysing its Manifest file.
RESOLVED: OSGi has resolved and satisfied all of the bundle's dependencies and performed class loading. This is the state reached before starting and after stopping.
STARTING: The "start" method of the bundle's Activator has been called but has not yet finished, successfully or otherwise.
ACTIVE: The bundle has been started successfully and is running; the "start" method of the Activator succeeded.
STOPPING: The "stop" method of the bundle's Activator has been called but has not yet finished, successfully or otherwise.
UNINSTALLED: The bundle has been removed from the system. From this state there is no transition to another state; the bundle must be installed again.

The transitions between these life-cycle states can be seen in the figure above. Let's build a simple example to clarify the concepts and steps described so far. There will be two bundles: one provides a random number generator service, and the other uses this service to print a random number every second from a separate thread. (Doesn't make much sense? Same here, but it is enough to grasp the concepts.) Now let's develop this sample project, preferably together, using Eclipse and Equinox.
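As an aside, the state names in the table above correspond to bit-flag constants on the org.osgi.framework.Bundle interface, and bundle.getState() returns one of them at runtime. The following standalone sketch copies the constant values from the OSGi core specification so that it compiles without the framework jars; the describe helper is my own illustration, not framework API:

```java
// Standalone sketch of the OSGi bundle life-cycle states.
// The constant values mirror org.osgi.framework.Bundle, where each state
// is a distinct bit so that state masks can be combined.
public class BundleStates {
    static final int UNINSTALLED = 0x00000001;
    static final int INSTALLED   = 0x00000002;
    static final int RESOLVED    = 0x00000004;
    static final int STARTING    = 0x00000008;
    static final int STOPPING    = 0x00000010;
    static final int ACTIVE      = 0x00000020;

    // In real OSGi code the argument would come from bundle.getState()
    static String describe(int state) {
        switch (state) {
            case UNINSTALLED: return "UNINSTALLED";
            case INSTALLED:   return "INSTALLED";
            case RESOLVED:    return "RESOLVED";
            case STARTING:    return "STARTING";
            case STOPPING:    return "STOPPING";
            case ACTIVE:      return "ACTIVE";
            default:          return "UNKNOWN";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(RESOLVED)); // RESOLVED
        System.out.println(describe(ACTIVE));   // ACTIVE
    }
}
```

A console command like "ss" (shown later in this article) prints exactly these state names next to each bundle id.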
In Eclipse, OSGi bundles are developed using the New Plug-in Project wizard, as seen below. Using the wizard, create two projects (us.elron.osgi.random and us.elron.osgi.user), following the required steps, and name your bundles and Activators (RandomActivator, UserActivator) as shown. The final layout of the projects should look like this.

The service definition, implementation and MANIFEST of the random number generating bundle (us.elron.osgi.random) are given below.

Interface (IRandomGenerator):

package us.elron.osgi.random;

public interface IRandomGenerator {
    int generate();
    int generate(int upperBound);
}

Service (RandomGenerator):

package us.elron.osgi.random.generator;

import java.util.Random;

import us.elron.osgi.random.IRandomGenerator;

public class RandomGenerator implements IRandomGenerator {

    private final Random random;

    public RandomGenerator() {
        this.random = new Random();
    }

    @Override
    public int generate() {
        return this.random.nextInt();
    }

    @Override
    public int generate(final int upperBound) {
        return this.random.nextInt(upperBound);
    }
}

Activator (RandomActivator):

package us.elron.osgi.random;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

import us.elron.osgi.random.generator.RandomGenerator;

public class RandomActivator implements BundleActivator {

    public void start(final BundleContext context) throws Exception {
        System.out.println("[Random] Let's 'Random'!");
        RandomGenerator randomGenerator = new RandomGenerator();
        context.registerService(IRandomGenerator.class.getName(), randomGenerator, null);
        System.out.println("[Random] Random services were registered.");
    }

    public void stop(final BundleContext context) throws Exception {
        System.out.println("[Random] Bundle is being stopped!");
    }
}

The MANIFEST.MF of the bundle will be as follows. A bundle should at least export the packages that contain its service interfaces, so that other bundles can make use of them.
Because loose coupling is one of the most important goals of SOA and OSGi systems, only the minimum required set of packages should be exported from any bundle.

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Random
Bundle-SymbolicName: us.elron.osgi.random
Bundle-Version: 1.0.0.qualifier
Bundle-Activator: us.elron.osgi.random.RandomActivator
Bundle-Vendor: ELRON.US
Require-Bundle: org.eclipse.core.runtime
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Bundle-ActivationPolicy: lazy
Export-Package: us.elron.osgi.random

As can be seen, a service is an implementation of a Java interface registered as an OSGi service. The Activator class is the OSGi access point of a bundle; OSGi uses it to manage the bundle's life cycle. While doing so, OSGi passes an implementation of the "org.osgi.framework.BundleContext" interface to the bundle. This interface enables the bundle to interact with the OSGi layer and, as can be seen in the code, to perform operations such as registering and obtaining an OSGi service. Now let's look at the user bundle classes. This is the class which prints the random numbers generated by the random generator service:
package us.elron.osgi.user;

import us.elron.osgi.random.IRandomGenerator;

public class RandomPrinter extends Thread {

    private final IRandomGenerator random;
    private volatile boolean run = true;

    public RandomPrinter(final IRandomGenerator random) {
        this.random = random;
    }

    @Override
    public void run() {
        while (this.run) {
            System.out.println("[User] new random number: " + this.random.generate(300));
            try {
                Thread.sleep(1000);
            } catch (final InterruptedException e) {
                break;
            }
        }
        System.out.println("[User] The process was terminated.");
    }

    public void close() {
        this.run = false;
    }
}

and this is the Activator implementation:

package us.elron.osgi.user;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

import us.elron.osgi.random.IRandomGenerator;

public class UserActivator implements BundleActivator {

    private RandomPrinter randomPrinter;

    public void start(final BundleContext context) throws Exception {
        System.out.println("[User] Here we go ..");
        ServiceReference randSrvRef = context.getServiceReference(IRandomGenerator.class.getName());
        IRandomGenerator randService = (IRandomGenerator) context.getService(randSrvRef);
        if (randService == null) {
            throw new Exception("[User] Error! Random service could not be found!");
        }
        this.randomPrinter = new RandomPrinter(randService);
        this.randomPrinter.start();
    }

    public void stop(final BundleContext bundleContext) throws Exception {
        System.out.println("[User] finish ..");
        this.randomPrinter.close();
    }
}

The MANIFEST.MF of the "user" bundle will be as follows. We must declare a dependency on the "us.elron.osgi.random" package of the random generator bundle, in which the random service interface resides. Dependencies can be defined at the bundle or the package level; however, to keep coupling between bundles low, package-level dependencies should be preferred whenever possible.
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: User
Bundle-SymbolicName: us.elron.osgi.user
Bundle-Version: 1.0.0.qualifier
Bundle-Activator: us.elron.osgi.user.UserActivator
Bundle-Vendor: ELRON.US
Require-Bundle: org.eclipse.core.runtime
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Bundle-ActivationPolicy: lazy
Import-Package: us.elron.osgi.random

To run these projects on OSGi using Eclipse, a run configuration should be defined as seen below. In the "Run (or Debug) Configurations" dialog, create a new configuration under OSGi Framework (with a right click) and select our two bundles in it. To supply the required dependencies of the selected bundles, use the "Add Required Bundles" button; Eclipse will resolve the dependency hierarchy and add the bundles required by the selected ones. We should also define the start order of the bundles, in accordance with their dependencies: dependent bundles should start after the bundles they depend on. So, in our example, we will set the start level of "us.elron.osgi.random" to 1 and that of "us.elron.osgi.user" to 2. Running the project with this configuration generates output like the following:

osgi> [Random] Let's 'Random'!
[Random] Random services were registered.
[User] Here we go ..
[User] new random number: 38
[User] new random number: 250
[User] new random number: 94
[User] new random number: 150
[User] new random number: 215
[User] new random number: 124
[User] new random number: 195
[User] new random number: 260
[User] new random number: 276
[User] new random number: 129

The OSGi runtime provides a console interface for interacting with it. When the console application window is running, the "osgi>" prompt tells us that we can access the console.
After mentioning a few important commands that you can execute in the console, I will leave you alone with it to discover what else can be done there, starting with the "help" command. The "ss" command shows all the bundles registered with OSGi along with their id, state, name and version. The id is a unique identifier given by OSGi to every bundle. It stays the same within one JVM execution, even if the bundle is uninstalled and installed again (one thing to discover), but it can change in a new execution. The state value indicates the status of the bundle (detailed in the table above), and the name and version values mean what their names suggest. For the current system, the output of the "ss" command is as follows:

osgi> ss

Framework is launched.

id	State       Bundle
0	ACTIVE      org.eclipse.osgi_3.6.0.v20100517
	            Fragments=4
2	ACTIVE      org.eclipse.core.jobs_3.5.0.v20100515
3	ACTIVE      javax.servlet_2.5.0.v200910301333
4	RESOLVED    javax.transaction_1.1.1.v201006150915
	            Master=0
5	ACTIVE      org.eclipse.core.runtime_3.6.0.v20100505
6	ACTIVE      org.eclipse.equinox.preferences_3.3.0.v20100503
7	ACTIVE      org.eclipse.osgi.services_3.2.100.v20100503
8	ACTIVE      org.eclipse.core.runtime.compatibility.auth_3.2.200.v20100517
9	ACTIVE      us.elron.osgi.random_1.0.0.qualifier
10	RESOLVED    org.eclipse.core.runtime.compatibility.registry_3.3.0.v20100520
	            Master=11
11	ACTIVE      org.eclipse.equinox.registry_3.5.0.v20100503
	            Fragments=10
12	ACTIVE      org.eclipse.equinox.app_1.3.0.v20100512
13	ACTIVE      org.eclipse.equinox.common_3.6.0.v20100503
14	ACTIVE      org.eclipse.core.contenttype_3.4.100.v20100505-1235
15	ACTIVE      us.elron.osgi.user_1.0.0.qualifier

osgi>

Let's suppose that we want to shut down our User bundle. In this case, we execute the "stop" command with the id of the bundle we want to stop (in this case 15):

[User] new random number: 48
[User] new random number: 49
osgi> stop 15
[User] finish ..
[User] The process was terminated.

When we look at the output of the "ss" command again,

Framework is launched.

id	State       Bundle
0	ACTIVE      org.eclipse.osgi_3.6.0.v20100517
	            Fragments=4
2	ACTIVE      org.eclipse.core.jobs_3.5.0.v20100515
3	ACTIVE      javax.servlet_2.5.0.v200910301333
4	RESOLVED    javax.transaction_1.1.1.v201006150915
	            Master=0
5	ACTIVE      org.eclipse.core.runtime_3.6.0.v20100505
6	ACTIVE      org.eclipse.equinox.preferences_3.3.0.v20100503
7	ACTIVE      org.eclipse.osgi.services_3.2.100.v20100503
8	ACTIVE      org.eclipse.core.runtime.compatibility.auth_3.2.200.v20100517
9	ACTIVE      us.elron.osgi.random_1.0.0.qualifier
10	RESOLVED    org.eclipse.core.runtime.compatibility.registry_3.3.0.v20100520
	            Master=11
11	ACTIVE      org.eclipse.equinox.registry_3.5.0.v20100503
	            Fragments=10
12	ACTIVE      org.eclipse.equinox.app_1.3.0.v20100512
13	ACTIVE      org.eclipse.equinox.common_3.6.0.v20100503
14	ACTIVE      org.eclipse.core.contenttype_3.4.100.v20100505-1235
15	RESOLVED    us.elron.osgi.user_1.0.0.qualifier

we see that the state of the User bundle with id 15 is RESOLVED (see the Life Cycle section). Likewise, we can execute the start command ("start 15") to start the bundle and observe that the process begins working again, execute the "services" command to see all services registered with OSGi, or use the "uninstall" command to remove a bundle from OSGi. You are free to discover!

In this article, I tried to explain simply what OSGi is, how it works and what can be done with it. I hope you enjoyed it. You can download the sources here. Feel free to comment or contact me via elron[at]elron.us; I will be glad to hear from you.

Reference: OSGi: An Introduction from our JCG partner Elron at the Ender Aydın Orak blog.

Related Articles:
OSGi Using Maven with Equinox
OSGI and Spring Dynamic Modules – Simple Hello World
OSGi – Simple Hello World with services
Java EE6 CDI, Named Components and Qualifiers...

DB unit testing with dbUnit, JSON, HSQLDB and JUnit Rules

In this week’s run of my TDD course, I thought it would be interesting to write a little fixture to make it easier to use dbUnit. My original thought was just to teach dbUnit about JSON, but it turns out that Lieven Doclo has done that already. So I decided to go a step further and also combine dbUnit with JUnit Rules, and provide automatic bootstrapping of an HSQLDB in-memory object store. The following test shows what I ended up with:

package com.danhaywood.tdd.dbunit.test;

import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;

import java.sql.ResultSet;
import java.sql.Statement;

import org.dbunit.Assertion;
import org.dbunit.dataset.ITable;
import org.hsqldb.jdbcDriver;
import org.junit.Rule;
import org.junit.Test;

import com.danhaywood.tdd.dbunit.DbUnitRule;
import com.danhaywood.tdd.dbunit.DbUnitRule.Ddl;
import com.danhaywood.tdd.dbunit.DbUnitRule.JsonData;

public class DbUnitRuleExample {

    @Rule
    public DbUnitRule dbUnit = new DbUnitRule(
            DbUnitRuleExample.class, jdbcDriver.class,
            "jdbc:hsqldb:file:src/test/resources/testdb", "SA", "");

    @Ddl("customer.ddl")
    @JsonData("customer.json")
    @Test
    public void update_lastName() throws Exception {
        // when
        Statement statement = dbUnit.getConnection().createStatement();
        statement.executeUpdate("update customer set last_name='Bloggs' where id=2");

        // then (verify directly)
        ResultSet rs2 = dbUnit.executeQuery("select last_name from customer where id = 2");
        assertThat(rs2.next(), is(true));
        assertThat(rs2.getString("last_name"), equalTo("Bloggs"));

        // then (verify using datasets)
        ITable actualTable = dbUnit.createQueryTable("customer", "select * from customer order by id");
        ITable expectedTable = dbUnit.jsonDataSet("customer-updated.json").getTable("customer");
        Assertion.assertEquals(expectedTable, actualTable);
    }
}

where customer.ddl is:

drop table customer if exists;
create table customer (
   id int not null primary key
  ,first_name varchar(30) not null
  ,initial varchar(1) null
  ,last_name varchar(30) not null
)

and customer.json (the initial data set) is:

{
    "customer": [
        { "id": 1, "first_name": "John", "initial": "K", "last_name": "Smith" },
        { "id": 2, "first_name": "Mary", "last_name": "Jones" }
    ]
}

and customer-updated.json (the final data set) is:

{
    "customer": [
        { "id": 1, "first_name": "John", "initial": "K", "last_name": "Smith" },
        { "id": 2, "first_name": "Mary", "last_name": "Bloggs" }
    ]
}

As you’ve probably figured out, the @Ddl annotation optionally specifies DDL script(s) to run against the database, while @JsonData defines a JSON-formatted dataset. The actual implementation of the DbUnitRule class is:

package com.danhaywood.tdd.dbunit;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.nio.charset.Charset;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.DataSetException;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.ITable;
import org.junit.rules.MethodRule;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.Statement;

import com.google.common.io.Resources;

public class DbUnitRule implements MethodRule {

    @Retention(RetentionPolicy.RUNTIME)
    @Target({ ElementType.METHOD })
    public static @interface Ddl {
        String[] value();
    }

    @Retention(RetentionPolicy.RUNTIME)
    @Target({ ElementType.METHOD })
    public static @interface JsonData {
        String value();
    }

    private final Class<?> resourceBase;

    private IDatabaseTester databaseTester;
    private IDatabaseConnection dbUnitConnection;

    private Connection connection;
    private java.sql.Statement statement;

    public DbUnitRule(Class<?> resourceBase, Class<?> driver, String url, String user, String password) {
        this.resourceBase = resourceBase;
        try {
            databaseTester = new JdbcDatabaseTester(driver.getName(), url, user, password);
            dbUnitConnection = databaseTester.getConnection();
            connection = dbUnitConnection.getConnection();
            statement = connection.createStatement();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public Statement apply(final Statement base, final FrameworkMethod method, final Object target) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                try {
                    // run any DDL scripts declared on the test method
                    Ddl ddl = method.getAnnotation(Ddl.class);
                    if (ddl != null) {
                        String[] values = ddl.value();
                        for (String value : values) {
                            executeUpdate(Resources.toString(
                                    resourceBase.getResource(value), Charset.defaultCharset()));
                        }
                    }
                    // load the JSON dataset, if declared
                    JsonData data = method.getAnnotation(JsonData.class);
                    if (data != null) {
                        IDataSet ds = new JSONDataSet(resourceBase.getResourceAsStream(data.value()));
                        databaseTester.setDataSet(ds);
                    }
                    databaseTester.onSetup();
                    base.evaluate();
                } finally {
                    databaseTester.onTearDown();
                }
            }
        };
    }

    public java.sql.Connection getConnection() {
        return connection;
    }

    public void executeUpdate(String sql) throws SQLException {
        statement.executeUpdate(sql);
    }

    public ResultSet executeQuery(String sql) throws SQLException {
        return statement.executeQuery(sql);
    }

    public IDataSet jsonDataSet(String datasetResource) {
        return new JSONDataSet(resourceBase.getResourceAsStream(datasetResource));
    }

    public ITable createQueryTable(String tableName, String sql) throws DataSetException, SQLException {
        return dbUnitConnection.createQueryTable(tableName, sql);
    }
}

This uses Lieven Doclo’s JSONDataSet (copied here for your convenience):

import org.codehaus.jackson.map.ObjectMapper;

import org.dbunit.dataset.*;
import org.dbunit.dataset.datatype.DataType;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.*;

/**
 * DBUnit DataSet format for JSON based datasets. It is similar to the flat XML layout,
 * but has some improvements (columns are calculated by parsing the entire dataset, not just
 * the first row). It uses Jackson, a fast JSON processor.
 * <br/><br/>
 * The format looks like this:
 * <pre>
 * {
 *    "&lt;table_name&gt;": [
 *       { "&lt;column&gt;": &lt;value&gt;, ... },
 *       ...
 *    ],
 *    ...
 * }
 * </pre>
 * I.e.:
 * <pre>
 * {
 *    "test_table": [
 *       { "id": 1, "code": "JSON dataset" },
 *       { "id": 2, "code": "Another row" }
 *    ],
 *    "another_table": [
 *       { "id": 1, "description": "Foo" },
 *       { "id": 2, "description": "Bar" }
 *    ]
 * }
 * </pre>
 *
 * @author Lieven DOCLO
 */
public class JSONDataSet extends AbstractDataSet {
    // The parser for the dataset JSON file
    private JSONITableParser tableParser = new JSONITableParser();

    // The tables after parsing
    private List<ITable> tables;

    /**
     * Creates a JSON dataset based on a file
     * @param file A JSON dataset file
     */
    public JSONDataSet(File file) {
        tables = tableParser.getTables(file);
    }

    /**
     * Creates a JSON dataset based on an input stream
     * @param is An input stream pointing to a JSON dataset
     */
    public JSONDataSet(InputStream is) {
        tables = tableParser.getTables(is);
    }

    @Override
    protected ITableIterator createIterator(boolean reverse) throws DataSetException {
        return new DefaultTableIterator(tables.toArray(new ITable[tables.size()]));
    }

    private class JSONITableParser {

        private ObjectMapper mapper = new ObjectMapper();

        /**
         * Parses a JSON dataset file and returns the list of DBUnit tables contained in that file
         * @param jsonFile A JSON dataset file
         * @return A list of DBUnit tables
         */
        public List<ITable> getTables(File jsonFile) {
            try {
                return getTables(new FileInputStream(jsonFile));
            } catch (IOException e) {
                throw new RuntimeException(e.getMessage(), e);
            }
        }

        /**
         * Parses a JSON dataset input stream and returns the list of DBUnit tables contained in it
         * @param jsonStream A JSON dataset input stream
         * @return A list of DBUnit tables
         */
        @SuppressWarnings("unchecked")
        public List<ITable> getTables(InputStream jsonStream) {
            List<ITable> tables = new ArrayList<ITable>();
            try {
                // get the base object tree from the JSON stream
                Map<String, Object> dataset = mapper.readValue(jsonStream, Map.class);
                // iterate over the tables in the object tree
                for (Map.Entry<String, Object> entry : dataset.entrySet()) {
                    // get the rows for the table
                    List<Map<String, Object>> rows = (List<Map<String, Object>>) entry.getValue();
                    ITableMetaData meta = getMetaData(entry.getKey(), rows);
                    // create a table based on the metadata
                    DefaultTable table = new DefaultTable(meta);
                    int rowIndex = 0;
                    // iterate through the rows and fill the table
                    for (Map<String, Object> row : rows) {
                        fillRow(table, row, rowIndex++);
                    }
                    // add the table to the list of DBUnit tables
                    tables.add(table);
                }
            } catch (IOException e) {
                throw new RuntimeException(e.getMessage(), e);
            }
            return tables;
        }

        /**
         * Gets the table metadata based on the rows for a table
         * @param tableName The name of the table
         * @param rows The rows of the table
         * @return The table metadata for the table
         */
        private ITableMetaData getMetaData(String tableName, List<Map<String, Object>> rows) {
            Set<String> columns = new LinkedHashSet<String>();
            // iterate through the dataset and add the column names to a set
            for (Map<String, Object> row : rows) {
                for (Map.Entry<String, Object> column : row.entrySet()) {
                    columns.add(column.getKey());
                }
            }
            List<Column> list = new ArrayList<Column>(columns.size());
            // create a list of DBUnit columns based on the column name set
            for (String s : columns) {
                list.add(new Column(s, DataType.UNKNOWN));
            }
            return new DefaultTableMetaData(tableName, list.toArray(new Column[list.size()]));
        }

        /**
         * Fills a table row
         * @param table The table to be filled
         * @param row A map containing the column values
         * @param rowIndex The index of the row to be filled
         */
        private void fillRow(DefaultTable table, Map<String, Object> row, int rowIndex) {
            try {
                table.addRow();
                // set the column values for the current row
                for (Map.Entry<String, Object> column : row.entrySet()) {
                    table.setValue(rowIndex, column.getKey(), column.getValue());
                }
            } catch (Exception e) {
                throw new RuntimeException(e.getMessage(), e);
            }
        }
    }
}

The libraries I used (i.e. the dependencies) are:

hsqldb 2.2.6
dbunit 2.4.8
jackson 1.9.3
slf4j-api 1.6.4, slf4j-nop 1.6.4
google-guava 10.0.1
junit 4.8

As ever, comments are welcomed.

Reference: DB unit testing with dbUnit, JSON, HSQLDB and JUnit Rules from our JCG partner Dan Haywood at the Dan Haywood blog.

Related Articles:
Rules in JUnit 4.9 (beta 3)
Spring 3 Testing with JUnit 4 – ContextConfiguration and AbstractTransactionalJUnit4SpringContextTests
Java RESTful API integration testing
When to replace Unit Tests with Integration Test
My Testing and Code Analysis Toolbox...
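For Maven users, the dependency list above can be expressed as a POM fragment. This is only a sketch: the groupId/artifactId coordinates are my best guess for these particular releases (for instance, Jackson 1.x lives under org.codehaus.jackson), so verify them against your repository before use.

```xml
<dependencies>
  <dependency>
    <groupId>org.hsqldb</groupId>
    <artifactId>hsqldb</artifactId>
    <version>2.2.6</version>
  </dependency>
  <dependency>
    <groupId>org.dbunit</groupId>
    <artifactId>dbunit</artifactId>
    <version>2.4.8</version>
  </dependency>
  <dependency>
    <groupId>org.codehaus.jackson</groupId>
    <artifactId>jackson-mapper-asl</artifactId>
    <version>1.9.3</version>
  </dependency>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.6.4</version>
  </dependency>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-nop</artifactId>
    <version>1.6.4</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>10.0.1</version>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.8</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```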

Pair Programming: The disadvantages of 100% pairing

I’ve written a lot of blog posts in the past about pair programming and the advantages I’ve seen from using this technique, but lately I find myself increasingly frustrated by the expectation to pair 100% of the time, which is the norm on most teams I work on. From my experience it’s certainly useful as a coaching tool; as I’ve mentioned before, it’s very useful for increasing the amount of collaboration between team members and an excellent way to ensure that knowledge of the code base is spread across the team. On the other hand, I no longer see it as the silver bullet I previously did, and I think we lose some useful things if people are forced to pair all the time.

Time to explore the code

Mark Wilden wrote a blog post about 18 months ago where he pointed out the following: "Pair programming doesn’t encourage quiet reflection and exploration. You can’t just sit back and read some code. You can’t just sit and think. I mean, you can, but then your pair is just sitting there." On the project I’m working on at the moment we pair on everything, so if you want to spend some time scanning through the code and seeing if there are ways to improve it, you tend to end up doing that in your own time. If I start to explore the code or do some scratch refactoring while pairing, to see how the code would look if we structured it slightly differently, then unless my pair is interested in what I’m doing, more often than not they’ll start playing with their phone. The alternative is to try to explain exactly what you’re trying to do, but more often than not you’re not entirely sure, or it takes longer to explain than to try it out. I think we could easily create this time for people during the day by simply agreeing to pair for, say, 7 1/2 hours a day instead of 8.
In the name of (short term) productivity

When you’re working as a pair, one of the things that’s supposed to improve is the productivity of both people – it’s much easier to get distracted by something or go down a rabbit hole when you’re on your own than when you’re pairing. On the other hand, I’ve started to wonder whether what we’re actually achieving is short-term productivity. In my experience, housekeeping tasks such as investigating why a test is ‘randomly’ failing on the CI machine are less likely to be looked at if everyone on the team is pairing 100% of the time, because doing so interferes with the productivity of a story. There is some logic to that, because investigating things like that can lead you down a rabbit hole, but not addressing them isn’t particularly helpful either, since they’re bound to ‘randomly’ fail again in the future and cause pain. I think pairing can also be detrimental to learning how to use new tools, although I can certainly understand the argument that you should be learning things like that in your own time anyway. For example, I know a little bit of sed and awk, and there are often times when we need to do a text replacement across a series of files; it’s very boring for my pair to watch while I try to work out exactly how to do that. More often than not we end up doing the task manually, which is slower but less frustrating for the other person.

Diminishing returns

I think pairing works very well when there’s a new problem to solve and some thinking to be done around how to design the code, but its value tends to diminish once we’ve built the cookie cutter. A reasonable amount of the work required to develop a standard web application is quite mundane once you’ve worked out how to do it; any subsequent work in that area tends to be about following the established pattern. It might not be the best pattern, but in my experience you’re less likely to go against it if you’re pairing, since you’ll have your ‘productivity’ hat on.
There is an argument that if you’re pairing and it’s boring, then you should find a way to automate the problem or make it possible to write less code. There have been times when I’ve seen pairs do this, but I’d say in a lot of cases there isn’t a significantly better way to solve the problem. Jay Fields recently wrote a post about his experiences after pair programming, and while I don’t think the types of projects I’ve worked on are ready for his approach, I don’t think 100% pairing is the answer either.

Reference: Pair Programming: The disadvantages of 100% pairing from our JCG partner Mark Needham at the Mark Needham Blog. Related Articles: How extreme is extreme programming? How to start a Coding Dojo...

Debugging the JVM

In some (rare) cases you might find yourself in the situation that you have managed to crash the JVM itself. I most recently managed this by setting the name of a ThreadGroup to null. In such cases it is useful to debug the JVM itself so that the crash can be located more precisely. Here are the steps to do it (they are Linux specific, since there is no readily available debugger under Windows):

Install gdb (under Ubuntu this would be something like: sudo apt-get install build-essential).
If you are using OpenJDK, install its debugging symbols so that the debugger can give more readable output (again, under Ubuntu this would be sudo apt-get install openjdk-6-dbg – substitute 6 with 7 if you are using the latest OpenJDK).

Now just prefix your java command with gdb --args:

gdb --args java Foo

When the gdb prompt comes up ("(gdb)"), type "run" (without the quotes) to start the actual running of the program. After the crash happens you should see a message like the following:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x6b195b70 (LWP 30273)]
(gdb)

Here you can use the commands "backtrace" and "backtrace full" to get an approximate idea of the crash site. To continue running (although the process will most probably just exit), input "c". To exit, killing the JVM in the process, type "quit". Consult the GDB tutorials available on the Internet for more commands and their parameters.
If you are debugging from inside Eclipse, you can do the following: in the configuration properties set the JRE to “Alternate JRE” and specify the Java executable as “javag” (also, make sure that you have “Allocate console” checked in the Common tab). Now go to your JDK run directory (/usr/lib/jvm/java-7-openjdk-i386/bin in my case) and create a javag file (sudo vim javag) with the following content: #!/bin/bash gdb -x '/usr/lib/jvm/java-7-openjdk-i386/bin/javag-commands' --args '/usr/lib/jvm/java-7-openjdk-i386/bin/java' $* Also create the javag-commands file with the following content: run Finally, make javag executable (sudo chmod +x javag) and you’re good to go! This workaround is necessary because Eclipse doesn’t accept absolute paths in the configuration tab. The second file is used to automatically pass the “run” command to gdb rather than the user having to type it themselves on each start. Also, keep in mind that while GDB has the process suspended, Java debuggers (like Eclipse) can’t communicate with it, so it is normal for them to throw all kinds of errors (like “target not responding”). Have a bug-free year, but if you find bugs, let them be at least reproducible. Reference: Debugging the JVM from our JCG partner Attila-Mihaly Balazs at the Transylvania JUG blog. Related Articles :JVM options: -client vs -server How to solve production problems Debugging a Production Server – Eclipse and JBoss showcase Monitoring OpenJDK from the CLI How many bugs do you have in your code?...

Best Of The Week – 2012 – W01

Hello guys, Time for the “Best Of The Week” links for the week that just passed. Here are some links that drew Java Code Geeks attention: * DevOps: What it is, and what it is not: In this article the author tries to demystify DevOps, discussing what it is and what it is not. Also check out Devops has made Release and Deployment Cool and Devops: How NOT to collect configuration management data. * 12 resolutions for programmers: New year’s resolutions for programmers, including taking up an analog activity, staying healthy, learning a new programming language, focusing on security and others. I don’t know if there is enough time for all of these, but good hints anyway. * RESTFul validation using Spring: A nice tutorial showing how to perform custom REST validation using Spring. Note that this is a three part tutorial series. Also check out Develop Restful web services using Spring MVC and RESTful Web Services with RESTeasy JAX-RS on Tomcat 7. * Method Validation With Spring 3.1 and Hibernate Validator 4.2: A nice article presenting how to get started with method level validation using Spring and Hibernate Validator. * Rethinking The Technical Resume: This article provides some nice pointers on how to think about your resume, stating that effectively a resume is a marketing tool. Avoid mistakes like listing everything you’ve done or writing paragraphs of text. * Example of RESTful webservice with XML and JSON using Maven, JAXB, Jersey, Tomcat and cURL: This article provides a nice example of implementing a RESTful web service with Jersey and JAXB and shows how to deploy it on Tomcat and test it with cURL. Also check out Simplifying RESTful Search and Spring 3 RESTful Web Services. * Pay Your Programmers $200/hour: A controversial article suggesting that (capable) programmers should be paid $200 per hour, something that would have benefits both for the contractor and the client. Interesting read. 
* How to Cache a File in Java: Article on how to cache files in Java in order to increase application performance. It discusses an algorithm for caching the file, a data structure for holding the cached content and a cache API for storing cached files. * Actors in Java: An introduction to the Actor programming model with Akka in Java. Actor model principles involve no shared (mutable) data and communication through asynchronous messages. Also check out Even simpler scalability with Akka through RegistryActor. * Recruiting programmers to your startup: Nice article on how to recruit programmers to your startup. The procedure suggested involves finding candidates, screening candidates, and convincing candidates to join you. Of course, the most important thing to understand is what motivates programmers. * Why Programmers don’t have a High Social Status?: A humorous article trying to solve the mystery of why programmers do not enjoy as high a social status as other educated people do. That’s all for this week. Stay tuned for more, here at Java Code Geeks. Cheers, Ilias Related Articles:Best Of The Week – 2011 – W53 Best Of The Week – 2011 – W52 Best Of The Week – 2011 – W51 Best Of The Week – 2011 – W50 Best Of The Week – 2011 – W49 Best Of The Week – 2011 – W48 Best Of The Week – 2011 – W47 Best Of The Week – 2011 – W46 Best Of The Week – 2011 – W45 Best Of The Week – 2011 – W44...

5+1 Sonar Plugins you must not miss

This post is a revision of the original post, published last year, which covered Sonar version 2.8. Many months have passed, and during this period the Sonar team released four new versions of the ultimate quality platform. The latest version (2.12) now includes JaCoCo in its core implementation, and the existing plugin is now deprecated. Since I had included the JaCoCo plugin in my previous list of top Sonar plugins, I think it’s time to remove it and refine my list. So here are the 5+1 Sonar plugins you must not miss for 2012!! I would like to clarify some exceptions I have made prior to my final choice. I have excluded all plugins that have to do with additional languages and IDEs, to keep this post as objective as I can. I have also excluded all commercial plugins, for obvious reasons. Given these assumptions, I have limited my selections to the following categories: Additional Metrics, Governance, Integration, Visualization / Reporting. Sonar itself comes with a variety of features that cover most of the needs of a software development team. However, I consider the following plugins essential, especially for teams that have adopted, or are trying to adopt, agile practices. To be honest, it was very difficult to select only 6 plugins!! 1. Hudson / Jenkins plugin Although Sonar analysis can be easily triggered from several build tools (Maven, Ant etc.), I strongly believe that its native integration with the most famous open source CI server makes it an important part of the continuous integration / deployment practice. The configuration is extremely easy, and the proposed best practice is to trigger Sonar in nightly builds. Team members can track software quality day by day, automatically, without worrying about when a new analysis should run. 2. Timeline Plugin (2012 new entry) How many times have you needed to see how much your source code has improved (hopefully) in the last weeks or months? Have you ever tried to compare basic quality indices in a single graph? 
The Timeline plugin integrates the Google Visualization Annotated TimeLine component at project level and provides a flexible way to browse historical data for Sonar quality metrics. Moreover, it adds version and date milestones to the visualization graph, providing in-depth details about the evolution of a software project. Extremely useful for all team members (developers, architects, testers, even managers). 3. Useless Code Plugin It may look similar to the Sonar core feature named Duplicate Code, but it adds some more metrics which I think are very useful, especially for large or legacy systems. In general it measures how many lines can be removed from your code. It reports the number of unused private methods that can be safely removed, and the number of unused protected methods that can be removed after some more careful code examination. Finally, it provides some more details about code duplication, showing how the duplicate lines are distributed (i.e. x blocks of y lines). 4. SIG Maintainability Model Plugin This plugin, as its name implies, is an implementation of the Software Improvement Group (SIG) Maintainability Model. It reports a ranking – from -- (very bad) to ++ (very good) – on the following base indicators: Analysability, Changeability, Stability and Testability. The core idea of this ranking is to measure a series of base metrics such as Lines of Code (LOC), Duplications, Unit Tests, Complexity and Unit Size. Each of these metrics is then factored into some of the mentioned indicators, and the final result represents the overall maintainability of the project. We can see the results of this analysis in a graphical (spider) presentation with all four axes of the model. At a glance, this graph gives you a view – global and detailed at the same time – of how easy it is to change and maintain your codebase. 
For me it is the first index I check every morning, and if something is not + or ++ then we have definitely done something wrong. 5. Quality Index Plugin Have you ever wanted to check a single number (indicator) and understand how healthy your project is? I am sure you have!! Well, the Quality Index plugin is exactly what you are looking for. The plugin combines four weighted axes of quality (complexity, coding violations, style violations, test coverage) and produces a ranking between 0 (lowest) and 10 (highest). Moreover, it calculates a method complexity factor based on the complexity mentioned above. Have you ever tried to get a ranking of 10 with this plugin? I think it’s worth the effort! 6. Technical Debt Plugin Last, but not least, the plugin that reports the interest you have to pay as a developer, as a team, as a company. Technical debt is a term coined by Ward Cunningham to remind us that if we don’t pay our interest from time to time, our software will eventually become unmaintainable, making it hard to add new features or even find the root cause of a defect. The plugin, which has a very powerful configuration, represents technical debt in four dimensions. Debt Ratio: the percentage of current technical debt relative to the maximum possible technical debt. Cost to reimburse: cost, in your currency, to pay all interest and clean up your code. Work to reimburse: the same, measured in man-days. Breakdown: the distribution across the following axes: Duplication, Violations, Complexity, Coverage, Documentation and Design. Be sure to check its measures to avoid finding yourself in a bad situation like spaghetti code. I am pretty sure that there are plenty of other interesting Sonar plugins, so please feel free to post comments with your own list. Reference: 5+1 Sonar Plugins you must not miss from our JCG partner Papapetrou P. Patroklos at the Only Software matters blog. 
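To make the Debt Ratio dimension above concrete: it is, in essence, the outstanding remediation cost divided by the maximum possible remediation cost. A hypothetical sketch of that arithmetic (the class and method names are mine for illustration, not the plugin’s API):

```java
/** Hypothetical sketch of the technical debt arithmetic described above. */
public class TechnicalDebtSketch {

    /**
     * Debt Ratio: current remediation cost relative to the maximum
     * possible remediation cost for the codebase (0.0 .. 1.0).
     */
    public static double debtRatio(double currentDebtCost, double maxPossibleDebtCost) {
        if (maxPossibleDebtCost <= 0) {
            throw new IllegalArgumentException("maximum debt cost must be positive");
        }
        return currentDebtCost / maxPossibleDebtCost;
    }

    /** Cost to reimburse: man-days of cleanup converted to currency at a daily rate. */
    public static double costToReimburse(double manDays, double dailyRate) {
        return manDays * dailyRate;
    }

    public static void main(String[] args) {
        // e.g. 50 man-days of outstanding cleanup out of a possible 200
        System.out.printf("Debt ratio: %.0f%%%n", debtRatio(50, 200) * 100);
        System.out.printf("Cost to reimburse: %.2f%n", costToReimburse(50, 600));
    }
}
```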
Related Articles :My Testing and Code Analysis Toolbox Services, practices & tools that should exist in any software development house, part 1 Java Tools: Source Code Optimization and Analysis Measuring Code Complexity Using FindBugs to produce substantially less buggy code...

Learning Android: Getting a service to communicate with an activity

In the app I’m working on I created a service which runs in the background, away from the main UI thread, consuming the Twitter streaming API using twitter4j. It looks like this:

public class TweetService extends IntentService {
    String consumerKey = "TwitterConsumerKey";
    String consumerSecret = "TwitterConsumerSecret";

    public TweetService() {
        super("Tweet Service");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        AccessToken accessToken = createAccessToken();
        StatusListener listener = new UserStreamListener() {
            // override a whole load of methods - removed for brevity
            public void onStatus(Status status) {
                String theTweet = status.getText();
                if (status.getText().contains("http://")) {
                    // do something with the tweet
                }
            }
        };
        ConfigurationBuilder configurationBuilder = new ConfigurationBuilder();
        configurationBuilder.setOAuthConsumerKey(consumerKey);
        configurationBuilder.setOAuthConsumerSecret(consumerSecret);
        TwitterStream twitterStream = new TwitterStreamFactory(configurationBuilder.build()).getInstance(accessToken);
        twitterStream.addListener(listener);
        twitterStream.user();
    }
}

That gets called from MyActivity like so:

public class MyActivity extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        ...
        super.onCreate(savedInstanceState);
        Intent intent = new Intent(this, TweetService.class);
        startService(intent);
    }
}

I wanted to be able to inform the UI each time there was a tweet which contained a link so that the link could be displayed on the UI. I found a post on StackOverflow which suggested that one way to do this would be to raise a broadcast message which could then be listened to by a BroadcastReceiver in the activity. It is possible for any other app to listen to the broadcast message as well if it wanted to, but in this case the information isn’t very important, so I think it’s fine to take this approach. 
I first had to change the service to look like this:

public class TweetTask {
    public static final String NEW_TWEET = "tweet_task.new_tweet";
}

public class TweetService extends IntentService {
    String consumerKey = "TwitterConsumerKey";
    String consumerSecret = "TwitterConsumerSecret";

    public TweetService() {
        super("Tweet Service");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        AccessToken accessToken = createAccessToken();
        StatusListener listener = new UserStreamListener() {
            // override a whole load of methods - removed for brevity
            public void onStatus(Status status) {
                String theTweet = status.getText();
                if (status.getText().contains("http://")) {
                    Intent tweetMessage = new Intent(TweetTask.NEW_TWEET);
                    tweetMessage.putExtra(android.content.Intent.EXTRA_TEXT, theTweet);
                    sendBroadcast(tweetMessage);
                }
            }
        };
        ConfigurationBuilder configurationBuilder = new ConfigurationBuilder();
        configurationBuilder.setOAuthConsumerKey(consumerKey);
        configurationBuilder.setOAuthConsumerSecret(consumerSecret);
        TwitterStream twitterStream = new TwitterStreamFactory(configurationBuilder.build()).getInstance(accessToken);
        twitterStream.addListener(listener);
        twitterStream.user();
    }
}

I then had to define the following code in MyActivity:

public class MyActivity extends Activity {
    private DataUpdateReceiver dataUpdateReceiver;

    protected void onResume() {
        super.onResume();
        if (dataUpdateReceiver == null) dataUpdateReceiver = new DataUpdateReceiver(textExtractionService);
        IntentFilter intentFilter = new IntentFilter(TweetTask.NEW_TWEET);
        registerReceiver(dataUpdateReceiver, intentFilter);
    }

    protected void onPause() {
        super.onPause();
        if (dataUpdateReceiver != null) unregisterReceiver(dataUpdateReceiver);
    }

    private class DataUpdateReceiver extends BroadcastReceiver {
        private CachedTextExtractionService textExtractionService;

        public DataUpdateReceiver(CachedTextExtractionService textExtractionService) {
            this.textExtractionService = textExtractionService;
        }

        @Override
        public void onReceive(Context context, Intent intent) {
            if (intent.getAction().equals(TweetTask.NEW_TWEET)) {
                // do something with the tweet
            }
        }
    }
}

Now whenever there’s a tweet with a link in it my BroadcastReceiver gets notified and I can do whatever I want with the tweet. This seems like a reasonably simple solution to the problem, so I’d be interested to know if there are any other drawbacks besides the one I identified above. Reference: Learning Android: Getting a service to communicate with an activity from our JCG partner Mark Needham at the Mark Needham Blog. Related Articles :Android Game Postmortem – ArkDroid Development Android Tutorial: Gestures in your app Android Google Maps Tutorial Android: Menu Class Investigation...

Introducing Deliberate Caching

A few weeks ago I attended a ThoughtWorks Technology Radar seminar. I worked at ThoughtWorks for years and think that if anyone knows what is trending up and down in software development, these guys do. At number 17 in Techniques, with a rising arrow, is what they called Thoughtful Caching. At drinks with Scott Shaw, I asked him what it meant. The trend is about the movement from reactive caching to a new style. By reactive I mean you find out your system doesn’t perform or scale after you build it and it is already in production. Lots of Ehcache users come to it that way. This is a trend I am very happy to see. Deliberate Caching The new technique is: proactive; planned; implemented before the system goes live; deliberate – which is more than turning on caching in your framework and hoping for the best (this is the Thoughtful part); and based on an understanding of the load characteristics and data access patterns. We kicked around a few names for this and came up with Deliberate Caching to sum all of this up. The work we are doing standardising caching for Java and JVM-based languages, JSR107, will only aid this transition. It will be included in Java EE 7, which even for those who have lost interest in following EE specifically will still send a signal that this is an architectural decision which should be made deliberately. Why has it taken this long? So, why has it taken until 10 years after Ehcache and Memcached and plenty of others came along for this “new” trend to emerge? I think there are a few reasons. Some people think caching is dirty I have met plenty of developers who think that caching is dirty, and that caching is cheating. They think it indicates some architectural design failure that is best solved some other way. One of the causes of this is that many early and open source caches (including Ehcache) placed limits on the data safety that could be achieved. So the usual situation was that the data in the cache might be, but was not guaranteed to be, correct. 
Complicated discussions with Business Analysts were required to find out whether this was acceptable and how stale the data was allowed to be. This has been overcome by the emergence of enterprise caches, such as Enterprise Ehcache, so named because they are feature-rich and contain extensive data safety options, including in Ehcache’s case: weak consistency, eventual consistency, strong consistency, explicit locking, local and XA transactions, and atomic operations. So you can use caching even in situations where the data has to be right. Following the lead of the giant dotcoms The other thing that has happened is that it cannot have escaped anyone’s notice that the giant dotcoms all use tons of caching, and that they won’t work if the caching layer is down. So much so that if you are building a big dotcom app it is clear that you need to build a caching layer in. Early Performance Optimisation is seen as an anti-pattern Under Agile we focus on the simplest thing that can possibly work. Requirements are expected to keep changing. Any punts you take on future requirements may turn out to be wrong and your effort wasted. You only add things once it is clear they are needed. Performance and scalability tend to get done this way as well. Following this model you find out about the requirement after you put the app in production and it fails. This same way of thinking causes monolithic systems with single data stores to be built which later turn out to need expensive re-architecting. I think we need to look at this as capacity planning. If we get estimated numbers at the start of the project for the number of users, required response times, data volumes, access patterns etc., then we can capacity-plan the architecture as well as the hardware. And in that architecture planning we can plan to use caching. Because caching affects how the system is architected and what the hardware requirements are, it makes sense to do it then. 
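A deliberate cache starts from a capacity decision made up front, out of exactly the kind of capacity planning described above. As a minimal illustration of that idea (plain Java, not the Ehcache or JSR107 API), a bounded LRU cache is only a few lines on top of LinkedHashMap:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * A deliberately sized LRU cache: the capacity is chosen up front from
 * capacity planning, not bolted on after a production incident.
 * Illustration only - real systems would use Ehcache, JSR107, etc.
 */
public class BoundedLruCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public BoundedLruCache(int maxEntries) {
        // accessOrder = true gives least-recently-used eviction order
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the least recently used entry once the planned capacity is exceeded
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        BoundedLruCache<String, String> cache = new BoundedLruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a", so "b" becomes the eldest entry
        cache.put("c", "3"); // exceeds capacity 2 and evicts "b"
        System.out.println(cache.keySet()); // "b" is gone
    }
}
```

The point is not the data structure but the discipline: the maxEntries number comes from estimated data volumes and access patterns gathered before go-live, the same way hardware is sized.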
Reference: Introducing Deliberate Caching from our JCG partner Greg Luck at the Greg Luck’s Blog. Related Articles :The new Java Caching Standard (javax.cache) High performance JPA with GlassFish and Coherence – Part 1 Spring 3.1 Cache Abstraction Tutorial The Persistence Layer with Spring 3.1 and JPA JBoss 4.2.x Spring 3 JPA Hibernate Tutorial GWT Spring and Hibernate enter the world of Data Grids...

Java 7: Project Coin in code examples

This blog introduces – by code examples – some new Java 7 features summarized under the term Project Coin. The goal of Project Coin is to add a set of small language changes to JDK 7. These changes simplify the Java language syntax. Less typing, cleaner code, happy developer ;-) Let’s look into that. Prerequisites Install the Java 7 SDK on your machine. Install Eclipse Indigo 3.7.1; you need to look out for the correct bundle for your operating system. In your Eclipse workspace you need to define the installed Java 7 JDK in your runtime. In the Workbench go to Window > Preferences > Java > Installed JREs and add your Java 7 home directory. Next you need to set the compiler level to 1.7 in Java > Compiler. Project Coin Improved literals A literal is the source code representation of a fixed value. “In Java SE 7 and later, any number of underscore characters (_) can appear anywhere between digits in a numerical literal. This feature enables you to separate groups of digits in numeric literals, which can improve the readability of your code.” (from the Java Tutorials)

public class LiteralsExample {
    public static void main(String[] args) {
        System.out.println("With underscores: ");
        long creditCardNumber = 1234_5678_9012_3456L;
        long bytes = 0b11010010_01101001_10010100_10010010;
        System.out.println(creditCardNumber);
        System.out.println(bytes);
        System.out.println("Without underscores: ");
        creditCardNumber = 1234567890123456L;
        bytes = 0b11010010011010011001010010010010;
        System.out.println(creditCardNumber);
        System.out.println(bytes);
    }
}

Notice the underscores in the literals (e.g. 1234_5678_9012_3456L). Results written to the console:

With underscores:
1234567890123456
-764832622
Without underscores:
1234567890123456
-764832622

As you can see, the underscores do not make a difference to the values. They are just used to make the code more readable. SafeVarargs Pre-JDK 7, you always got an unchecked warning when calling certain varargs library methods. 
Without the new @SafeVarargs annotation this example would create unchecked warnings.

public class SafeVarargsExample {
    @SafeVarargs
    static void m(List<String>... stringLists) {
        Object[] array = stringLists;
        List<Integer> tmpList = Arrays.asList(42);
        array[0] = tmpList; // compiles without warnings
        String s = stringLists[0].get(0); // ClassCastException at runtime
    }

    public static void main(String[] args) {
        m(new ArrayList<String>());
    }
}

The new annotation does not help to get around the annoying ClassCastException at runtime. Also, it can only be applied to static and final methods. Therefore, I believe it will not be a great help. Future versions of Java will have compile-time errors for unsafe code like the one in the example above. Diamond In Java 6 it required some patience to create, say, a list of maps. Look at this example:

public class DiamondJava6Example {
    public static void main(String[] args) {
        List<Map<Date, String>> listOfMaps = new ArrayList<Map<Date, String>>(); // type information twice!
        HashMap<Date, String> aMap = new HashMap<Date, String>(); // type information twice!
        aMap.put(new Date(), "Hello");
        listOfMaps.add(aMap);
        System.out.println(listOfMaps);
    }
}

As you can see on the right-hand side of the assignments, you need to repeat the type information for the listOfMaps variable as well as for the aMap variable. This isn’t necessary anymore in Java 7:

public class DiamondJava7Example {
    public static void main(String[] args) {
        List<Map<Date, String>> listOfMaps = new ArrayList<>(); // type information once!
        HashMap<Date, String> aMap = new HashMap<>(); // type information once!
        aMap.put(new Date(), "Hello");
        listOfMaps.add(aMap);
        System.out.println(listOfMaps);
    }
}

Multicatch In Java 7 you do not need a catch clause for every single exception; you can catch multiple exceptions in one clause. 
You remember code like this:

public class HandleExceptionsJava6Example {
    public static void main(String[] args) {
        Class string;
        try {
            string = Class.forName("java.lang.String");
            string.getMethod("length").invoke("test");
        } catch (ClassNotFoundException e) {
            // do something
        } catch (IllegalAccessException e) {
            // do the same !!
        } catch (IllegalArgumentException e) {
            // do the same !!
        } catch (InvocationTargetException e) {
            // yeah, well, again: do the same!
        } catch (NoSuchMethodException e) {
            // ...
        } catch (SecurityException e) {
            // ...
        }
    }
}

Since Java 7 you can write it like this, which makes our lives a lot easier:

public class HandleExceptionsJava7ExampleMultiCatch {
    public static void main(String[] args) {
        try {
            Class string = Class.forName("java.lang.String");
            string.getMethod("length").invoke("test");
        } catch (ClassNotFoundException | IllegalAccessException | IllegalArgumentException
                | InvocationTargetException | NoSuchMethodException | SecurityException e) {
            // do something, and only write it once!!!
        }
    }
}

String in switch statements Since Java 7 one can use string variables in switch clauses. Here is an example:

public class StringInSwitch {
    public void printMonth(String month) {
        switch (month) {
            case "April":
            case "June":
            case "September":
            case "November":
            case "January":
            case "March":
            case "May":
            case "July":
            case "August":
            case "December":
            default:
                System.out.println("done!");
        }
    }
}

Try-with-resources This feature really helps in terms of reducing unexpected runtime exceptions. In Java 7 you can use the so-called try-with-resources clause that automatically closes all open resources, even if an exception occurs. 
Look at the example:

import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class TryWithResourceExample {
    public static void main(String[] args) throws FileNotFoundException {
        // Java 7 try-with-resources
        String file1 = "TryWithResourceFile.out";
        try (OutputStream out = new FileOutputStream(file1)) {
            out.write("Some silly file content ...".getBytes());
            ":-p".charAt(3);
        } catch (StringIndexOutOfBoundsException | IOException e) {
            System.out.println("Exception on operating file " + file1 + ": " + e.getMessage());
        }

        // Java 6 style
        String file2 = "WithoutTryWithResource.out";
        OutputStream out = new FileOutputStream(file2);
        try {
            out.write("Some silly file content ...".getBytes());
            ":-p".charAt(3);
        } catch (StringIndexOutOfBoundsException | IOException e) {
            System.out.println("Exception on operating file " + file2 + ": " + e.getMessage());
        }

        // Let's try to operate on the resources
        File f1 = new File(file1);
        if (f1.delete()) System.out.println("Successfully deleted: " + file1);
        else System.out.println("Problems deleting: " + file1);

        File f2 = new File(file2);
        if (f2.delete()) System.out.println("Successfully deleted: " + file2);
        else System.out.println("Problems deleting: " + file2);
    }
}

In the first block, the try-with-resources clause is used to open the file that we want to operate on, and the charAt call then generates a runtime exception. Notice that I do not explicitly close the resource. This is done automatically when you use try-with-resources. It *isn’t* when you use the Java 6 equivalent shown in the second block. The code will write the following result to the console:

Exception on operating file TryWithResourceFile.out: String index out of range: 3
Exception on operating file WithoutTryWithResource.out: String index out of range: 3
Successfully deleted: TryWithResourceFile.out
Problems deleting: WithoutTryWithResource.out

That’s it in terms of Project Coin. Very useful stuff in my eyes. 
Reference: “Java 7: Project Coin in code examples” from our JCG partner Niklas. Related Articles :Java 7 Feature Overview Manipulating Files in Java 7 GC with Automatic Resource Management in Java 7 Java 7: try-with-resources explained Java SE 7, 8, 9 – Moving Java Forward...

Why I Like The Verbosity of Java

Java is too verbose, they say. You can find comparisons of Hello World programs that take 2 lines in Ruby and 10 lines in Java, and in order to read a file you need 20 lines in Java and just 1 in PHP. Even though the examples are often exaggerated (for example by counting imports), it is true that Java programs require more lines of code. But this is not a bad thing at all. On the contrary – it is something I actually like. In fact, it is not about the verbosity of the language – apart from anonymous classes instead of closures, there is nothing else that the language is too verbose about. It is about the core libraries. So – I like the way the core libraries are written in terms of verbosity. Two examples: Take the java.io package – reading and writing files, streams, etc. It is a bit hard to grasp, and in the beginning you copy-paste long snippets of code to simply read a file. But it forces you to understand the abstraction of streams and readers. Other languages simply have: var contents = readFile("path") Cool, but you are never forced to understand how the I/O management works. What happens if reading fails? Is partial reading of the file sufficient for you? Can you navigate the file? Should you close resources, or are they closed automatically? You don’t need to answer these questions for a hello world program, but you will need to know about them pretty soon. And the less-verbose languages hide them from you and postpone this “abstraction revelation”. Or take the servlet API. At first it looks to have some hairy classes and interfaces. But soon enough you realize how the whole thing works – not only in Java, but the general lifecycle of an HTTP request. Because you need a Servlet object, request and response objects, and output streams to write to, you understand the whole request-response cycle. I have a personal example here. I’d been writing PHP for one year (in school). Then one month of Java and servlets made it completely clear to me how the whole thing works. 
PHP was very easy to use – $_GET['foo'], session_start() and a bunch of HTML in between. So I didn’t bother to understand the underlying mechanics. Java forced me to. You may argue that – fine, it forces you to learn these important concepts and abstractions, but it should also give you an easy way to accomplish things. But if the core libraries themselves had these options, all the tutorials would show them, and the lower-level APIs would be forgotten. So the solution is – 3rd party libraries. Apache and Google give you these. With Guava and Apache Commons you have all these one-liners: FileUtils.readLines(..), Joiner.on(",").join(array), etc. But you don’t start with these libraries, and you learn how things function on a slightly lower level – a level that you will be required to know anyway. Reference: Why I Like The Verbosity of Java  from our JCG partner Bozhidar Bozhanov at the Bozho’s tech blog. Related Articles :Hate Java? You’re fighting the wrong battle. Selecting a new programming language to learn Writing Code that Doesn’t Suck If I had more time I would have written less code...
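As a postscript to the readFile("path") point above: since Java 7 both styles actually live in the core libraries, and putting them side by side shows exactly which questions the one-liner hides. A small sketch (the class name is mine):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class ReadFileBothWays {

    /** The "verbose" way: you see the reader, the loop and the cleanup. */
    public static List<String> readWithStreams(String path) throws IOException {
        List<String> lines = new ArrayList<>();
        BufferedReader reader = new BufferedReader(new FileReader(path));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        } finally {
            reader.close(); // resource handling is explicit and in your face
        }
        return lines;
    }

    /** The one-liner (Java 7 NIO.2): the same questions exist, just hidden. */
    public static List<String> readWithNio(String path) throws IOException {
        return Files.readAllLines(Paths.get(path), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, "hello\nworld\n".getBytes(StandardCharsets.UTF_8));
        System.out.println(readWithStreams(tmp.toString()));
        System.out.println(readWithNio(tmp.toString()));
        Files.delete(tmp);
    }
}
```

Both methods return the same lines; the difference is only in how much of the failure and cleanup story you are forced to confront while writing them.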
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.