Metrics: Good VS Evil

Pop quiz: What do earned value, burn-down charts, and coverage reports have in common? They are all reporting a metric, sure. But you could have gotten that from the title. Let’s dig deeper. Let’s start with earned value. Earned value is a project tracking system where you get value points as you progress through the project. That means that if you’re 40% done with the task, you get, let’s say, 40 points. This is pre-agile thinking, assuming that we know everything about the task and that nothing can change on the way, and therefore those 40 points mean something. We know that we can have a project 90% done, running for 3 years without a single demo to the customer. From experience, the feedback will change everything. But if we believe the metrics, we’re on track. Burn-down charts measure how many story points we’ve got left in our sprint. They can forecast whether we’re on track or how many story points we’ll complete. To do that, they assume that all stories are pretty much the same size. First, the forecast may not tell the true story if, for example, the big story doesn’t get completed. And somehow the conclusion is that we need to improve the estimation, rather than cut the story into smaller pieces. Test coverage is another misleading metric. You can have 100% covered non-working code. You can show an increment in coverage on important, risky code, and you can also show a drop in safe code. Or you can do it the other way around and get the same numbers, with the misplaced confidence. These metrics, and others like them, have a few things in common:

- They are easy to measure and track
- They already worked for someone else
- They are simple enough to understand and misinterpret
- Once we start to measure them, we forget the “what if” scenarios and return to their “one true meaning”

In our ongoing search for simplicity, metrics help us minimize a whole uncertain world into a number. We like that.
Not only do we not need to tire our minds with different scenarios, numbers always come with the “engineering” halo. And we can trust those engineers, right? I recently heard that we seem to forget that the I in KPI stands for indicator. Meaning, it points in some direction, but it can also take us off course if we don’t look around and understand the environment. Metrics are over-simplifications, and we should use them as such. They may have a halo, but they can also be pretty devilish.

Reference: Metrics: Good VS Evil from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

Java Numeric Formatting

I can think of numerous times when I have seen others write unnecessary Java code, and I have written unnecessary Java code, because of a lack of awareness of a JDK class that already provides the desired functionality. One example of this is the writing of time-related constants using hard-coded values such as 60, 24, 1440, and 86400 when TimeUnit provides a better, standardized approach. In this post, I look at another example of a class that provides functionality I have often seen developers implement on their own: NumberFormat. The NumberFormat class is part of the java.text package, which also includes the frequently used DateFormat and SimpleDateFormat classes. NumberFormat is an abstract class (no public constructor) and instances of its descendants are obtained via overloaded static methods with names such as getInstance(), getCurrencyInstance(), and getPercentInstance().

Currency

The next code listing demonstrates calling NumberFormat.getCurrencyInstance(Locale) to get an instance of NumberFormat that presents numbers in a currency-friendly format.

Demonstrating NumberFormat’s Currency Support

/**
 * Demonstrate use of a Currency Instance of NumberFormat.
 */
public void demonstrateCurrency()
{
   writeHeaderToStandardOutput("Currency NumberFormat Examples");
   final NumberFormat currencyFormat = NumberFormat.getCurrencyInstance(Locale.US);
   out.println("15.5 -> " + currencyFormat.format(15.5));
   out.println("15.54 -> " + currencyFormat.format(15.54));
   out.println("15.345 -> " + currencyFormat.format(15.345));  // rounds to two decimal places
   printCurrencyDetails(currencyFormat.getCurrency());
}

/**
 * Print out details of provided instance of Currency.
 *
 * @param currency Instance of Currency from which details
 *    will be written to standard output.
 */
public void printCurrencyDetails(final Currency currency)
{
   out.println("Currency: " + currency);
   out.println("\tISO 4217 Currency Code: " + currency.getCurrencyCode());
   out.println("\tISO 4217 Numeric Code: " + currency.getNumericCode());
   out.println("\tCurrency Display Name: " + currency.getDisplayName(Locale.US));
   out.println("\tCurrency Symbol: " + currency.getSymbol(Locale.US));
   out.println("\tCurrency Default Fraction Digits: " + currency.getDefaultFractionDigits());
}

When the above code is executed, the results are as shown next:

==================================================================================
= Currency NumberFormat Examples
==================================================================================
15.5 -> $15.50
15.54 -> $15.54
15.345 -> $15.35
Currency: USD
	ISO 4217 Currency Code: USD
	ISO 4217 Numeric Code: 840
	Currency Display Name: US Dollar
	Currency Symbol: $
	Currency Default Fraction Digits: 2

The above code and associated output demonstrate that the NumberFormat instance used for currency (actually a DecimalFormat) automatically applies the appropriate number of digits and the appropriate currency symbol based on the locale.

Percentages

The next code listing and associated output demonstrate use of NumberFormat to present numbers in a percentage-friendly format.

Demonstrating NumberFormat’s Percent Format

/**
 * Demonstrate use of a Percent Instance of NumberFormat.
 */
public void demonstratePercentage()
{
   writeHeaderToStandardOutput("Percentage NumberFormat Examples");
   final NumberFormat percentageFormat = NumberFormat.getPercentInstance(Locale.US);
   out.println("Instance of: " + percentageFormat.getClass().getCanonicalName());
   out.println("1 -> " + percentageFormat.format(1));
   // will be 0 because truncated to Integer by Integer division
   out.println("75/100 -> " + percentageFormat.format(75/100));
   out.println(".75 -> " + percentageFormat.format(.75));
   out.println("75.0/100 -> " + percentageFormat.format(75.0/100));
   // will be 0 because truncated to Integer by Integer division
   out.println("83/93 -> " + percentageFormat.format((83/93)));
   out.println("93/83 -> " + percentageFormat.format(93/83));
   out.println(".5 -> " + percentageFormat.format(.5));
   out.println(".912 -> " + percentageFormat.format(.912));
   out.println("---- Setting Minimum Fraction Digits to 1:");
   percentageFormat.setMinimumFractionDigits(1);
   out.println("1 -> " + percentageFormat.format(1));
   out.println(".75 -> " + percentageFormat.format(.75));
   out.println("75.0/100 -> " + percentageFormat.format(75.0/100));
   out.println(".912 -> " + percentageFormat.format(.912));
}

==================================================================================
= Percentage NumberFormat Examples
==================================================================================
1 -> 100%
75/100 -> 0%
.75 -> 75%
75.0/100 -> 75%
83/93 -> 0%
93/83 -> 100%
.5 -> 50%
.912 -> 91%
---- Setting Minimum Fraction Digits to 1:
1 -> 100.0%
.75 -> 75.0%
75.0/100 -> 75.0%
.912 -> 91.2%

The code and output of the percent NumberFormat usage demonstrate that, by default, the instance of NumberFormat (actually a DecimalFormat in this case) returned by the NumberFormat.getPercentInstance(Locale) method has no fractional digits, multiplies the provided number by 100 (it assumes the input is the decimal equivalent of a percentage), and adds a percent sign (%).
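The integer-division rows above (75/100 and 83/93 both printing 0%) are a pitfall that shows up constantly in real code. A small helper sketch, with a method name of my own choosing, applies the usual fix of casting one operand to double before handing the ratio to the percent formatter:

```java
import java.text.NumberFormat;
import java.util.Locale;

public class PercentHelper {
    // Formats part/whole as a percentage; the cast to double prevents
    // integer division from truncating the ratio to 0 before formatting
    static String ratioAsPercent(int part, int whole) {
        NumberFormat percentFormat = NumberFormat.getPercentInstance(Locale.US);
        return percentFormat.format((double) part / whole);
    }

    public static void main(String[] args) {
        System.out.println(ratioAsPercent(75, 100)); // 75%
        System.out.println(ratioAsPercent(83, 93));  // 89%
    }
}
```

Without the cast, ratioAsPercent(83, 93) would evaluate 83/93 as 0 and print 0%, exactly as in the output listing above.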
Integers

The small amount of code shown next and its associated output demonstrate use of NumberFormat to present numbers in integral format.

Demonstrating NumberFormat’s Integer Format

/**
 * Demonstrate use of an Integer Instance of NumberFormat.
 */
public void demonstrateInteger()
{
   writeHeaderToStandardOutput("Integer NumberFormat Examples");
   final NumberFormat integerFormat = NumberFormat.getIntegerInstance(Locale.US);
   out.println("7.65 -> " + integerFormat.format(7.65));
   out.println("7.5 -> " + integerFormat.format(7.5));
   out.println("7.49 -> " + integerFormat.format(7.49));
   out.println("-23.23 -> " + integerFormat.format(-23.23));
}

==================================================================================
= Integer NumberFormat Examples
==================================================================================
7.65 -> 8
7.5 -> 8
7.49 -> 7
-23.23 -> -23

As demonstrated in the above code and associated output, the NumberFormat method getIntegerInstance(Locale) returns an instance that presents provided numbers as integers.

Fixed Digits

The next code listing and associated output demonstrate using NumberFormat to print fixed-point representations of floating-point numbers. In other words, this use of NumberFormat allows one to represent a number with an exactly prescribed number of digits to the left of the decimal point (“integer” digits) and to the right of the decimal point (“fraction” digits).

Demonstrating NumberFormat for Fixed-Point Numbers

/**
 * Demonstrate generic NumberFormat instance with rounding mode,
 * maximum fraction digits, and minimum integer digits specified.
 */
public void demonstrateNumberFormat()
{
   writeHeaderToStandardOutput("NumberFormat Fixed-Point Examples");
   final NumberFormat numberFormat = NumberFormat.getNumberInstance();
   numberFormat.setRoundingMode(RoundingMode.HALF_UP);
   numberFormat.setMaximumFractionDigits(2);
   numberFormat.setMinimumIntegerDigits(1);
   out.println(numberFormat.format(234.234567));
   out.println(numberFormat.format(1));
   out.println(numberFormat.format(.234567));
   out.println(numberFormat.format(.349));
   out.println(numberFormat.format(.3499));
   out.println(numberFormat.format(0.9999));
}

==================================================================================
= NumberFormat Fixed-Point Examples
==================================================================================
234.23
1
0.23
0.34
0.35
1

The above code and associated output demonstrate the fine-grained control of the minimum number of “integer” digits to represent to the left of the decimal point (at least one, so zero shows up when applicable) and the maximum number of “fraction” digits to the right of the decimal point. Although not shown, the maximum number of integer digits and minimum number of fraction digits can also be specified.

Conclusion

I have used this post to look at how NumberFormat can be used to present numbers in different ways (currency, percentage, integer, fixed number of decimal points, etc.), which often means that no code, or less code, needs to be written to massage numbers into these formats. When I first began writing this post, I envisioned including examples and discussion on the direct descendants of NumberFormat (DecimalFormat and ChoiceFormat), but have decided this post is already sufficiently lengthy. I may write about these descendants of NumberFormat in future blog posts.

Reference: Java Numeric Formatting from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Programming Language Job Trends Part 1 – August 2014

It is time for the August edition of the programming language job trends! The response to the language list changes was definitely positive, so things will stay stable for this edition. In Part 1, we look at Java, C++, C#, Objective C, and Visual Basic. I did look at the trends for Swift, but the demand is not high enough yet. Part 2 (PHP, Python, JavaScript, and others) and Part 3 (Erlang, Groovy, Scala, and others) of the job trends will be posted in the next few days as well. First, we look at the job trends from Indeed.com:

As you can see, there is a definite negative trend over the past three years. Java continues to lead, but its demand is less than half of what it was at its peak in 2009. C++ and C# have followed the same steady decline since 2010. I finally determined the appropriate search for Visual Basic, and it shows a clear decline over the entirety of the graph. In this installment, Visual Basic demand finally dips below Objective C. Interestingly, Objective C demand stays fairly flat over the past year. Given this, it makes you wonder whether mobile demand is really that high, or whether it is only replacing some of the application development. Normally, I would look at the short-term trends from SimplyHired, but the searches are not working correctly, especially for C++ and C#. As opposed to previous posts, SimplyHired’s data is much more current, but still not useful for analysis purposes. Lastly, here is a review of the relative growth from Indeed:

Objective C continues to dominate the relative growth, but it has had a declining trend for over a year. The other languages in this analysis are all seeing a negative trend now, with C# just going negative in the past few months. Visual Basic is showing the largest decline, probably over 80%. Obviously, there are a number of reasons for this type of decline among this group of languages.
It is difficult to assess whether mobile development affects these trends, especially because Java and Objective C are the main languages for native Android and iOS apps. The rise of alternative languages like Scala or Clojure could be contributing to this, as could the rise of data science and machine learning. The breadth of languages being used, including mature languages like Python and Ruby, could also be affecting these trends, and we will look at these in the next few days.

Reference: Programming Language Job Trends Part 1 – August 2014 from our JCG partner Rob Diana at the Regular Geek blog....

Java Concurrency Tutorial – Thread-safe designs

After reviewing the main risks when dealing with concurrent programs (like atomicity or visibility), we will go through some class designs that will help us prevent the aforementioned bugs. Some of these designs result in the construction of thread-safe objects, allowing us to share them safely between threads. As an example, we will consider immutable and stateless objects. Other designs will prevent different threads from modifying the same data, like thread-local variables. You can see all the source code at github.

1. Immutable objects

Immutable objects have a state (data which represents the object’s state), but it is built upon construction, and once the object is instantiated the state cannot be modified. Although threads may interleave, the object has only one possible state. Since all fields are read-only, no thread will be able to change the object’s data. For this reason, an immutable object is inherently thread-safe. Product shows an example of an immutable class.
It builds all its data during construction and none of its fields are modifiable:

public final class Product {
    private final String id;
    private final String name;
    private final double price;

    public Product(String id, String name, double price) {
        this.id = id;
        this.name = name;
        this.price = price;
    }

    public String getId() {
        return this.id;
    }

    public String getName() {
        return this.name;
    }

    public double getPrice() {
        return this.price;
    }

    public String toString() {
        return new StringBuilder(this.id).append("-").append(this.name)
            .append(" (").append(this.price).append(")").toString();
    }

    public boolean equals(Object x) {
        if (this == x) return true;
        if (x == null) return false;
        if (this.getClass() != x.getClass()) return false;
        Product that = (Product) x;
        if (!this.id.equals(that.id)) return false;
        if (!this.name.equals(that.name)) return false;
        if (this.price != that.price) return false;
        return true;
    }

    public int hashCode() {
        int hash = 17;
        hash = 31 * hash + this.getId().hashCode();
        hash = 31 * hash + this.getName().hashCode();
        hash = 31 * hash + ((Double) this.getPrice()).hashCode();
        return hash;
    }
}

In some cases, it won’t be sufficient to make a field final.
For example, the MutableProduct class is not immutable although all its fields are final:

public final class MutableProduct {
    private final String id;
    private final String name;
    private final double price;
    private final List<String> categories = new ArrayList<>();

    public MutableProduct(String id, String name, double price) {
        this.id = id;
        this.name = name;
        this.price = price;
        this.categories.add("A");
        this.categories.add("B");
        this.categories.add("C");
    }

    public String getId() {
        return this.id;
    }

    public String getName() {
        return this.name;
    }

    public double getPrice() {
        return this.price;
    }

    public List<String> getCategories() {
        return this.categories;
    }

    public List<String> getCategoriesUnmodifiable() {
        return Collections.unmodifiableList(categories);
    }

    public String toString() {
        return new StringBuilder(this.id).append("-").append(this.name)
            .append(" (").append(this.price).append(")").toString();
    }
}

Why is the above class not immutable? The reason is that we let a reference escape the scope of its class. The field ‘categories‘ is a mutable reference, so after returning it, the client could modify it. In order to show this, consider the following program:

public static void main(String[] args) {
    MutableProduct p = new MutableProduct("1", "a product", 43.00);
    System.out.println("Product categories");
    for (String c : p.getCategories())
        System.out.println(c);
    p.getCategories().remove(0);
    System.out.println("\nModified Product categories");
    for (String c : p.getCategories())
        System.out.println(c);
}

And the console output:

Product categories
A
B
C

Modified Product categories
B
C

Since the categories field is mutable and it escaped the object’s scope, the client has modified the categories list. The product, which was supposed to be immutable, has been modified, leading to a new state. If you want to expose the content of the list, you could use an unmodifiable view of it:

public List<String> getCategoriesUnmodifiable() {
    return Collections.unmodifiableList(categories);
}
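Beyond exposing an unmodifiable view, the class can be made genuinely immutable by taking a defensive copy in the constructor, so no mutable reference ever escapes. A minimal sketch (the class name and reduced field list are illustrative, not from the original source):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public final class ImmutableProduct {
    private final String id;
    private final List<String> categories;

    public ImmutableProduct(String id, List<String> categories) {
        this.id = id;
        // Defensive copy wrapped as unmodifiable: later changes to the
        // caller's list cannot reach this object, and the getter below
        // can safely hand out the internal reference
        this.categories = Collections.unmodifiableList(new ArrayList<>(categories));
    }

    public String getId() {
        return this.id;
    }

    public List<String> getCategories() {
        return this.categories; // any mutation attempt throws UnsupportedOperationException
    }

    public static void main(String[] args) {
        List<String> source = new ArrayList<>(Arrays.asList("A", "B", "C"));
        ImmutableProduct p = new ImmutableProduct("1", source);
        source.add("D"); // does not affect the already-constructed product
        System.out.println(p.getCategories()); // [A, B, C]
    }
}
```

With this design, both escape routes are closed: the constructor copies on the way in, and the getter returns a view that rejects modification.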
2. Stateless objects

Stateless objects are similar to immutable objects, but in this case they do not have any state at all. When an object is stateless, it does not have to remember any data between invocations. Since there is no state to modify, one thread will not be able to affect the result of another thread invoking the object’s operations. For this reason, a stateless class is inherently thread-safe. ProductHandler is an example of this type of object. It contains several operations over Product objects and does not store any data between invocations. The result of an operation does not depend on previous invocations or any stored data:

public class ProductHandler {
    private static final int DISCOUNT = 90;

    public Product applyDiscount(Product p) {
        double finalPrice = p.getPrice() * DISCOUNT / 100;
        return new Product(p.getId(), p.getName(), finalPrice);
    }

    public double sumCart(List<Product> cart) {
        double total = 0.0;
        for (Product p : cart.toArray(new Product[0]))
            total += p.getPrice();
        return total;
    }
}

In its sumCart method, ProductHandler converts the product list to an array, since the for-each loop uses an iterator internally to iterate through its elements. List iterators are not thread-safe and could throw a ConcurrentModificationException if the list is modified during iteration. Depending on your needs, you might choose a different strategy.

3. Thread-local variables

Thread-local variables are variables defined within the scope of a thread. No other thread can see or modify them. The first type is local variables. In the example below, the total variable is stored in the thread’s stack:

public double sumCart(List<Product> cart) {
    double total = 0.0;
    for (Product p : cart.toArray(new Product[0]))
        total += p.getPrice();
    return total;
}

Just take into account that if, instead of a primitive, you define a reference and return it, it will escape its scope. You may not know where the returned reference is stored.
The code that calls the sumCart method could store it in a static field and allow it to be shared between different threads. The second type is the ThreadLocal class. This class provides storage independent for each thread. Values stored into an instance of ThreadLocal are accessible from any code within the same thread. The ClientRequestId class shows an example of ThreadLocal usage:

public class ClientRequestId {
    private static final ThreadLocal<String> id = new ThreadLocal<String>() {
        @Override
        protected String initialValue() {
            return UUID.randomUUID().toString();
        }
    };

    public static String get() {
        return id.get();
    }
}

The ProductHandlerThreadLocal class uses ClientRequestId to return the same generated id within the same thread:

public class ProductHandlerThreadLocal {
    //Same methods as in ProductHandler class

    public String generateOrderId() {
        return ClientRequestId.get();
    }
}

If you execute the main method, the console output will show different ids for each thread. As an example:

T1 - 23dccaa2-8f34-43ec-bbfa-01cec5df3258
T2 - 936d0d9d-b507-46c0-a264-4b51ac3f527d
T2 - 936d0d9d-b507-46c0-a264-4b51ac3f527d
T3 - 126b8359-3bcc-46b9-859a-d305aff22c7e
...

If you are going to use ThreadLocal, you should be aware of the risks of using it when threads are pooled (as in application servers). You could end up with memory leaks or information leaking between requests. I won’t go deeper into this subject here, since the post How to shoot yourself in foot with ThreadLocals explains well how this can happen.

4. Using synchronization

Another way of providing thread-safe access to objects is through synchronization. If we synchronize all accesses to a reference, only a single thread will access it at a given time. We will discuss this in further posts.

5. Conclusion

We have seen several techniques that help us build simpler objects that can be shared safely between threads. It is much harder to prevent concurrent bugs if an object can have multiple states.
On the other hand, if an object can have only one state or none, we won’t have to worry about different threads accessing it at the same time.

Reference: Java Concurrency Tutorial – Thread-safe designs from our JCG partner Xavier Padro at the Xavier Padró’s Blog blog....

Integrating jOOQ with PostgreSQL: Partitioning

Introduction

jOOQ is a great framework when you want to work with SQL in Java without having too much ORM in your way. At the same time, it can be integrated into many environments, as it offers support for many database-specific features. One such database-specific feature is partitioning in PostgreSQL. Partitioning in PostgreSQL is mainly used for performance reasons, because it can improve query performance in certain situations. jOOQ has no explicit support for this feature, but it can be integrated quite easily, as we will show you.

This article is brought to you by the Germany based jOOQ integration partner UWS Software Service (UWS). UWS is specialised in custom software development, application modernisation and outsourcing with a distinct focus on the Java Enterprise ecosystem.

Partitioning in PostgreSQL

With the partitioning feature of PostgreSQL you have the possibility of splitting data that would form a huge table into multiple separate tables. Each of the partitions is a normal table which inherits its columns and constraints from a parent table. This so-called table inheritance can be used for “range partitioning” where, for example, the data from one range does not overlap the data from another range in terms of identifiers, dates or other criteria. Like in the following example, you can have partitioning for a table “author” that shares the same foreign key of a table “authorgroup” in all its rows.

CREATE TABLE author (
  authorgroup_id int,
  LastName varchar(255)
);

CREATE TABLE author_1 (
  CONSTRAINT authorgroup_id_check_1
    CHECK ((authorgroup_id = 1))
) INHERITS (author);

CREATE TABLE author_2 (
  CONSTRAINT authorgroup_id_check_2
    CHECK ((authorgroup_id = 2))
) INHERITS (author);

...

As you can see, we set up inheritance and, in order to have a simple example, we just put in one constraint checking that the partitions have the same “authorgroup_id”. Basically, this results in the “author” table only containing table and column definitions, but no data.
However, when querying the “author” table, PostgreSQL will really query all the inheriting “author_n” tables, returning a combined result.

A trivial approach to using jOOQ with partitioning

In order to work with the partitioning described above, jOOQ offers several options. You can use the default way, which is to let jOOQ generate one class per table. In order to insert data into multiple tables, you would have to use different classes. This approach is used in the following snippet:

// add
InsertQuery query1 = dsl.insertQuery(AUTHOR_1);
query1.addValue(AUTHOR_1.ID, 1);
query1.addValue(AUTHOR_1.LAST_NAME, "Nowak");
query1.execute();

InsertQuery query2 = dsl.insertQuery(AUTHOR_2);
query2.addValue(AUTHOR_2.ID, 1);
query2.addValue(AUTHOR_2.LAST_NAME, "Nowak");
query2.execute();

// select
Assert.assertTrue(dsl
    .selectFrom(AUTHOR_1)
    .where(AUTHOR_1.LAST_NAME.eq("Nowak"))
    .fetch().size() == 1);

Assert.assertTrue(dsl
    .selectFrom(AUTHOR_2)
    .where(AUTHOR_2.LAST_NAME.eq("Nowak"))
    .fetch().size() == 1);

You can see that multiple classes generated by jOOQ need to be used, so depending on how many partitions you have, generated classes can pollute your codebase. Also, imagine that you eventually need to iterate over partitions, which would be cumbersome to do with this approach. Another approach could be to use jOOQ to build fields and tables using string manipulation, but that is error-prone again and prevents support for generic type safety. Also, consider the case where you want true data separation in terms of multi-tenancy. You see that there are some considerations to make when working with partitioning. Fortunately, jOOQ offers various ways of working with partitioned tables, and in the following we’ll compare approaches, so that you can choose the one most suitable for you.
Using jOOQ with partitioning and multi-tenancy

jOOQ’s runtime-schema mapping is often used to realize database environments, such that, for example, during development one database is queried, but when deployed to production the queries go to another database. Multi-tenancy is another recommended use case for runtime-schema mapping, as it allows for strict partitioning and for configuring your application to only use databases or tables configured in the runtime-schema mapping. So running the same code would result in working with different databases or tables depending on the configuration, which allows for true separation of data in terms of multi-tenancy. The following configuration, taken from the jOOQ documentation, is executed when creating the DSLContext, so it can be considered a system-wide setting:

Settings settings = new Settings()
    .withRenderMapping(new RenderMapping()
        .withSchemata(
            new MappedSchema().withInput("DEV")
                .withOutput("MY_BOOK_WORLD")
                .withTables(
                    new MappedTable().withInput("AUTHOR")
                        .withOutput("AUTHOR_1"))));

// Add the settings to the Configuration
DSLContext create = DSL.using(
    connection, SQLDialect.ORACLE, settings);

// Run queries with the "mapped" configuration
create.selectFrom(AUTHOR).fetch();

// results in SQL:
// "SELECT * FROM MY_BOOK_WORLD.AUTHOR_1"

Using this approach you can map one table to one partition permanently, e.g. “AUTHOR” to “AUTHOR_1” for environment “DEV”. In another environment you could choose to map the “AUTHOR” table to “AUTHOR_2”. Runtime-schema mapping only allows you to map to exactly one table on a per-query basis, so you could not handle the use case where you would want to manipulate more than one table partition. If you would like to have more flexibility, you might want to consider the next approach.

Using jOOQ with partitioning and without multi-tenancy

If you need to handle multiple table partitions without having multi-tenancy, you need a more flexible way of accessing partitions.
The following example shows how you can do it in a dynamic and type-safe way, avoiding errors and staying usable in the same elegant way you are used to with jOOQ:

// add
for (int i = 1; i <= 2; i++) {
    Builder part = forPartition(i);
    InsertQuery query = dsl.insertQuery(part.table(AUTHOR));
    query.addValue(part.field(AUTHOR.ID), 1);
    query.addValue(part.field(AUTHOR.LAST_NAME), "Nowak");
    query.execute();
}

// select
for (int i = 1; i <= 2; i++) {
    Builder part = forPartition(i);
    Assert.assertTrue(dsl
        .selectFrom(part.table(AUTHOR))
        .where(part.field(AUTHOR.LAST_NAME).eq("Nowak"))
        .fetch()
        .size() == 1);
}

What you can see above is that the partition numbers are abstracted away, so that you can use the “AUTHOR” table instead of “AUTHOR_1”. Thus, your code won’t be polluted with many generated classes. Another thing is that the partitioner object is initialized dynamically, so you can use it, for example, in a loop like the one above. It also follows the Builder pattern, so that you can operate on it like you are used to with jOOQ. The code above does exactly the same as the first trivial snippet, but there are multiple benefits, like type-safe and reusable access to partitioned tables.

Integration of jOOQ partitioning without multi-tenancy into a Maven build process (optional)

If you are using Continuous Integration you can integrate the solution above so that jOOQ does not generate classes for the partitioned tables. This can be achieved using a regular expression that excludes certain table names when generating Java classes.
When using Maven, your integration might look something like this:

<generator>
  <name>org.jooq.util.DefaultGenerator</name>
  <database>
    <name>org.jooq.util.postgres.PostgresDatabase</name>
    <includes>.*</includes>
    <excludes>.*_[0-9]+</excludes>
    <inputSchema>${db.schema}</inputSchema>
  </database>
  <target>
    <packageName>com.your.company.jooq</packageName>
    <directory>target/generated-sources/jooq</directory>
  </target>
</generator>

Then it’s just a matter of calling mvn install, and the jOOQ Maven plugin will generate the Java classes from the database schema at build time.

Integrating jOOQ with PostgreSQL: Partitioning

This article described how jOOQ, in combination with the partitioning feature of PostgreSQL, can be used to implement multi-tenancy and improve database performance. PostgreSQL’s documentation states that for partitioning “the benefits will normally be worthwhile only when a table would otherwise be very large. The exact point at which a table will benefit from partitioning depends on the application, although a rule of thumb is that the size of the table should exceed the physical memory of the database server.” Achieving support for partitioning with jOOQ is as easy as adding configuration or a small utility class; jOOQ is then able to support partitioning with or without multi-tenancy and without sacrificing type safety. Apart from Java-level integration, the described solution also smoothly integrates into your build and test process. You may want to look at the sources of the partitioner utility class, which also includes a test class so that you can see the behavior and integration in more detail. Please let us know if you need support for this or other jOOQ integrations within your environment. UWS Software Service (UWS) is an official jOOQ integration partner.

Reference: Integrating jOOQ with PostgreSQL: Partitioning from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Feature Toggles (Feature Switches or Feature Flags) vs Feature Branches

Feature Branches

If you are using branches, you are not doing Continuous Integration / Deployment / Delivery! You might have great code coverage with unit tests, you might be doing TDD, you might have functional and integration tests written in BDD format, and you might run all of them on every commit to the repository. However, if you have branches, integration is delayed until they are merged, and that means there is no continuous integration.

People tend to like feature branches. They provide the flexibility to decide what to release and when. All there is to do is merge the features that will be released into the main branch and deploy to production. The problem with this approach is the delay. Integration is delayed until the merge. We cannot know whether all those separately developed features work well together until the merge is done and all the unit, functional, integration and manual tests are run. On top of that, there is the pain of the merge itself. Days, weeks or even months of work are suddenly merged into the main branch and from there into all not-yet-to-be-released branches. The idea of Continuous Integration is to detect problems as soon as possible, with minimum delay until problems are found. The most common way to mitigate this problem is to keep feature branches small and short-lived, so that they can be merged into the main branch after only a day or two. However, in many cases this is simply not possible. A feature might be too big to be developed in only a few days. The business might require us to release it only in conjunction with other features. In other words, feature branches are great for allowing us to decide what to release and when. They provide us the flexibility, but they prevent us from doing Continuous Integration. If there were only one branch, we could do continuous integration, but the release problems that were solved with feature branches would come back to haunt us.
Feature Toggles Feature Toggles (sometimes called Feature Switches or Feature Flags) address the need to deploy only a selected set of features while keeping a single (main) branch. With them we can do all the work in one branch, have continuous integration take care of the quality of our code, and use flags to keep features turned off until they are ready to be released. We get all the benefits of Continuous Integration together with the flexibility to choose which features will be available and which will be hidden. Moreover, it is a step towards Continuous Deployment. If we have satisfying automated test coverage and can switch features off and on, there is nothing really preventing us from deploying to production every commit that passes all verification. Even if some bug sneaks into production, with Feature Toggles it would be very easy to turn that feature off until it is fixed. The basic idea is to have a configuration file that defines toggles for features that are pending to be fully released. An alternative to a configuration file is a database table. The application should use those toggles to decide whether to make some feature available to the users or not. They might even be used to display some functionality to a subset of users based on their role, geographic location or a random sample. There are only a few rules to be followed:

- Use toggles only until the feature is fully deployed and proven to work. Otherwise, you might end up with "spaghetti code" full of if/else statements containing old toggles that are not in use any more.
- Do not spend too much time testing toggles. In most cases it is enough to confirm that the entry point into the new feature is not visible: for example, the link to that new feature.
- Do not overuse toggles; do not use them when there is no need for them. For example, you might be developing a new screen that is accessible through a link on the home page. If that link is added last, there might be no need for a toggle that hides it.

Examples There are many libraries that provide a solution for Feature Toggles. However, implementing them is so easy that you might choose to do it yourself. Here are a few examples of possible implementations of Feature Toggles. They use AngularJS, but the logic behind them could be applied to any other language or framework.

[JSON with Feature Toggles]

{
    "feature1": { "displayed": false },
    "feature2": { "displayed": true },
    "feature3": { "displayed": false }
}

I tend to have more values in my toggles JSON, such as disabled, description, allowed_users, etc. The above example is only the bare-bones minimal solution. Next, we should load the JSON into, in this example, the AngularJS scope.

[AngularJS that loads Feature Toggles into the scope]

$http.get('/api/v1/data/features').then(function(response) {
    $scope.features = response.data;
});

Once Feature Toggles are in the scope, the rest is fairly easy. The following AngularJS example hides the feature when displayed is set to false.

[AngularJS HTML that hides some element]

<div ng-show="features.feature1.displayed">
    <!-- Feature HTML -->
</div>

In some cases, you might be working on a completely new screen that will replace the old one. In that scenario, something like the following might be the solution.

[AngularJS that returns a URL depending on a Feature Toggle]

$scope.getMyUrl = function() {
    if ($scope.features.feature2.displayed) {
        return 'myNewUrl';
    } else {
        return 'myOldUrl';
    }
};

Then, from the HTML, it would be something like the example below.

[AngularJS HTML that toggles the URL]

<a href="{{getMyUrl()}}">This link leads somewhere</a>

In some other cases the change might be on the server. Following REST API best practices, you would create a new API version and use a Feature Toggle to decide which one to use. The code could look like the following.
[AngularJS that performs a request depending on a Feature Toggle]

var apiUrl;
if ($scope.features.feature2.displayed) {
    apiUrl = '/api/v2/myFancyFeature';
} else {
    apiUrl = '/api/v1/myFancyFeature';
}
$http.get(apiUrl).then(function(response) {
    // Do something with the response
});

The same logic applied to the front end could be applied to the back end or any other type of application. In most cases it boils down to simple if/else statements.Reference: Feature Toggles (Feature Switches or Feature Flags) vs Feature Branches from our JCG partner Viktor Farcic at the Technology conversations blog....
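The same if/else decision can be sketched on a Java back end as well. The class below is a minimal illustration (the class name, the hard-coded map and the URL values are assumptions for the example; a real application would load the toggles from the configuration file or database table described above):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal feature-toggle lookup. In a real application the map would be
// populated from a configuration file or a database table.
public class FeatureToggles {

    private final Map<String, Boolean> displayed = new HashMap<>();

    public FeatureToggles() {
        // Same toggles as the JSON example above
        displayed.put("feature1", false);
        displayed.put("feature2", true);
        displayed.put("feature3", false);
    }

    // Unknown features are treated as switched off
    public boolean isDisplayed(String feature) {
        return displayed.getOrDefault(feature, false);
    }

    // Mirrors the if/else that picks the API version on the client
    public String apiUrl(String feature) {
        return isDisplayed(feature) ? "/api/v2/myFancyFeature"
                                    : "/api/v1/myFancyFeature";
    }

    public static void main(String[] args) {
        FeatureToggles toggles = new FeatureToggles();
        System.out.println(toggles.apiUrl("feature2")); // feature2 is on, so v2
        System.out.println(toggles.apiUrl("feature1")); // feature1 is off, so v1
    }
}
```

Centralizing the lookup in one place keeps the toggle checks out of the business logic and makes it easy to delete a toggle once the feature is proven to work.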

Monitoring and Filtering Application Log to Mail with log4j

In today’s post I’m going to show you how to filter log statements into a warning email. This came out of a necessity to monitor a few critical points of an application I was working on. There are tools you can use to perform application monitoring. I’m not going into detail about those tools, but sometimes it’s just easier to have the application send a warning email. I mostly use log4j for my logging requirements. Unfortunately, since there are so many logging frameworks in the Java ecosystem, this post only covers a piece of it. I might do something for the others in the future, but I would like to reinforce an old post by António Gonçalves about standardizing a logging API: I need you for Logging API Spec Lead!. The sample covered here is for log4j, but the github project also contains a log4j2 sample. Use Case To give a little more detail, I want to be informed when the application generates errors, but also to ignore errors that are already handled by the application itself. For a more concrete example, I had a case where a database insert could generate a constraint violation exception, but this error was handled specifically by the application. Even so, the JDBC driver logs the exception, and for this case I was not interested in getting a notification. Setting up SMTPAppender Anyway, looking into log4j, you can create an appender that sends all your log statements by email; just check SMTPAppender.
It looks like this:

log4j-SMTPAppender

<appender name="SMTP" class="org.apache.log4j.net.SMTPAppender">
    <errorHandler class="org.apache.log4j.helpers.OnlyOnceErrorHandler"/>
    <param name="Threshold" value="ERROR"/>
    <param name="To" value="someone@somemail.com"/>
    <param name="From" value="someonelse@somemail.com"/>
    <param name="Subject" value="Log Errors"/>
    <param name="SMTPHost" value="smtp.somemail.com"/>
    <param name="SMTPUsername" value="username"/>
    <param name="SMTPPassword" value="password"/>
    <param name="BufferSize" value="1"/>
    <param name="SMTPDebug" value="true"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d{ABSOLUTE} %-5p [%c{1}:%L] %m%n"/>
    </layout>
</appender>

Filtering Our filtering needs are not available in the standard log4j lib. You need to use log4j-extras, which provides the ExpressionFilter that supports filtering on complex expressions. We are also using StringMatchFilter from the regular log4j lib. Now, we can add a triggeringPolicy to the SMTPAppender:

log4j-triggeringPolicy

<triggeringPolicy class="org.apache.log4j.rolling.FilterBasedTriggeringPolicy">
    <filter class="org.apache.log4j.varia.StringMatchFilter">
        <param name="StringToMatch" value="ERROR01"/>
        <param name="AcceptOnMatch" value="false"/>
    </filter>
    <filter class="org.apache.log4j.filter.ExpressionFilter">
        <param name="expression" value="CLASS LIKE .*Log4jExpressionFilter.*"/>
        <param name="acceptOnMatch" value="false"/>
    </filter>
    <filter class="org.apache.log4j.filter.LevelRangeFilter">
        <param name="levelMin" value="ERROR"/>
        <param name="levelMax" value="FATAL"/>
    </filter>
</triggeringPolicy>

This configuration emails only ERROR and FATAL log events that are NOT logged by classes with Log4jExpressionFilter in their name and that DON'T contain ERROR01 in the log message. Have a look into LoggingEventFieldResolver to see which other expressions you can use with ExpressionFilter.
You can use EXCEPTION, METHOD and a few others, which are very useful. Testing Testing an SMTPAppender is not easy if you rely on real servers. Fortunately, you can use mock-javamail, and you don’t even have to worry about polluting an SMTP server. This is also included in the github project. Resources You can clone a full working copy from my github repository for log4j and log4j2. Log4j Mail Filter Since I may modify the code in the future, you can download the original source of this post from release 1.0. Alternatively, clone the repo and check out the tag from release 1.0 with the following command: git checkout 1.0.Reference: Monitoring and Filtering Application Log to Mail with log4j from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog....
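Stripped of the XML, the triggering policy above boils down to a boolean predicate over a logging event. The plain-Java sketch below (no log4j dependency; the class and method names are made up for illustration) mirrors that logic and makes it easy to reason about which events produce an email:

```java
// Plain-Java model of the triggering policy above: a mail is sent only for
// ERROR/FATAL events whose message does not contain "ERROR01" and whose
// logger class name does not match .*Log4jExpressionFilter.*
public class MailTriggerPolicy {

    public static boolean shouldTriggerMail(String level, String loggerClass, String message) {
        boolean errorOrFatal = level.equals("ERROR") || level.equals("FATAL");
        boolean handledError = message.contains("ERROR01");                      // StringMatchFilter
        boolean filteredClass = loggerClass.matches(".*Log4jExpressionFilter.*"); // ExpressionFilter
        return errorOrFatal && !handledError && !filteredClass;                  // LevelRangeFilter
    }

    public static void main(String[] args) {
        System.out.println(shouldTriggerMail("ERROR", "com.example.Dao", "constraint violated"));
        System.out.println(shouldTriggerMail("ERROR", "com.example.Dao", "ERROR01: duplicate key, handled"));
        System.out.println(shouldTriggerMail("INFO", "com.example.Dao", "all good"));
    }
}
```

This is the effective behaviour of the deny-on-match filters chained in the triggeringPolicy, collapsed into a single expression.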

Parameterized Test Runner in JUnit

We have all written unit tests where a single test checks different possible input-output combinations. Let's look at how that is done by taking a simple Fibonacci series example. The code below computes the Fibonacci series for the number of elements requested:

import java.util.ArrayList;
import java.util.List;

public class Fibonacci {

    public List<Integer> getFiboSeries(int numberOfElements) {
        List<Integer> fiboSeries = new ArrayList<>(numberOfElements);
        for (int i = 0; i < numberOfElements; i++) {
            // First 2 elements are 1, 1
            if (i == 0 || i == 1) {
                fiboSeries.add(i, 1);
            } else {
                int firstPrev = fiboSeries.get(i - 2);
                int secondPrev = fiboSeries.get(i - 1);
                int fiboElement = firstPrev + secondPrev;
                fiboSeries.add(i, fiboElement);
            }
        }
        return fiboSeries;
    }
}

Let's see the conventional way of testing the above code with multiple input values:

import java.util.Arrays;
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.*;

public class FibonacciCachedTest {

    /**
     * Test of getFiboSeries method, of class Fibonacci.
     */
    @Test
    public void testGetFiboSeries() {
        System.out.println("getFiboSeries");
        int numberOfElements = 5;
        Fibonacci instance = new Fibonacci();
        List<Integer> expResult = Arrays.asList(1, 1, 2, 3, 5);
        List<Integer> result = instance.getFiboSeries(numberOfElements);
        assertEquals(expResult, result);

        numberOfElements = 10;
        expResult = Arrays.asList(1, 1, 2, 3, 5, 8, 13, 21, 34, 55);
        result = instance.getFiboSeries(numberOfElements);
        assertEquals(expResult, result);
    }
}

So we have been able to test 2 inputs. Imagine extending the above for more inputs: unnecessary bloat in the test code. JUnit provides a different runner, called the Parameterized runner, which exposes a static method annotated with @Parameters. This method has to be implemented to return the collection of inputs and expected outputs that will be used to run the test defined in the class.
Let's look at the code which does this:

import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import static org.junit.Assert.assertEquals;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class ParametrizedFiboTest {

    private final int number;
    private final List<Integer> values;

    public ParametrizedFiboTest(FiboInput input) {
        this.number = input.number;
        this.values = input.values;
    }

    @Parameterized.Parameters
    public static Collection<Object[]> fiboData() {
        return Arrays.asList(new Object[][]{
            {new FiboInput(1, Arrays.asList(1))},
            {new FiboInput(2, Arrays.asList(1, 1))},
            {new FiboInput(3, Arrays.asList(1, 1, 2))},
            {new FiboInput(4, Arrays.asList(1, 1, 2, 3))},
            {new FiboInput(5, Arrays.asList(1, 1, 2, 3, 5))},
            {new FiboInput(6, Arrays.asList(1, 1, 2, 3, 5, 8))}
        });
    }

    @Test
    public void testGetFiboSeries() {
        Fibonacci instance = new Fibonacci();
        List<Integer> result = instance.getFiboSeries(this.number);
        assertEquals(this.values, result);
    }
}

class FiboInput {

    public int number;
    public List<Integer> values;

    public FiboInput(int number, List<Integer> values) {
        this.number = number;
        this.values = values;
    }
}

This way, we only need to add a new input and expected output to the fiboData() method to get a new test case!Reference: Parameterized Test Runner in JUnit from our JCG partner Mohamed Sanaulla at the Experiences Unlimited blog....
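Under the hood, the Parameterized runner does little more than instantiate the test class once per element returned by the @Parameters method and run each @Test against it. The plain-Java sketch below (no JUnit dependency; class and method names are illustrative) reproduces that loop with the same Fibonacci data, which shows why adding a case is a one-line change:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Plain-Java sketch of what the Parameterized runner does: run the same
// check once for every (input, expected) pair.
public class FiboParameterSketch {

    static List<Integer> fibo(int n) {
        List<Integer> series = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            series.add(i < 2 ? 1 : series.get(i - 2) + series.get(i - 1));
        }
        return series;
    }

    public static void main(String[] args) {
        // Same data as fiboData(); adding a case is one more line here
        List<List<Integer>> expected = Arrays.asList(
                Arrays.asList(1),
                Arrays.asList(1, 1),
                Arrays.asList(1, 1, 2),
                Arrays.asList(1, 1, 2, 3),
                Arrays.asList(1, 1, 2, 3, 5));
        for (int n = 1; n <= expected.size(); n++) {
            if (!fibo(n).equals(expected.get(n - 1))) {
                throw new AssertionError("mismatch for n=" + n);
            }
        }
        System.out.println("all " + expected.size() + " cases passed");
    }
}
```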

How to successfully attack a software dinosaur?

We have all “enjoyed” working with software that was purchased because “you can’t get fired for buying…”. This software is known for being the industry leader. Not because it is easy to use, easy to integrate, easy to scale, or easy to do anything with; it is often quite the opposite. So why do people buy it? First of all, it is easy to find experts. There are people out there who have been “enjoying” working with this solution for the last 10 years. It is relatively stable and reliable. There is a complete ecosystem around it, with hundreds or thousands of partner solutions. And people have just given up on trying to convince their bosses to try something different. 5 steps to disrupt the Dinosaur Step 1: the basic use cases Apply the Pareto rule: what are the 80% of the use cases that reflect only 20% of the functionality? Step 2: the easy & beautiful & horizontally scalable & multi-tenant clone Build a solution that covers 80% of those use cases, but make it beautiful and incredibly easy to use. Use the latest horizontally scalable backends, e.g. Cassandra. Build multi-tenancy into the software from day 1. Step 3: make it open source Release the “improved clone” as an open source product. Step 4: the plugin economy Add a plugin mechanism that allows others to create plugins to fill the 20% use-case gap. Create a marketplace so that others can make money with their plugins, and make money yourself by being the broker. Think App Store, but ideally improve on the model. Step 5: the SaaS version Create a SaaS version and attack the bottom part of the market. Focus on the enterprises that could never afford the original product, then slowly move upwards towards the top segment. The expected result You will have a startup or a new business unit that makes money pretty quickly and will soon be the target of a big purchase offer from the Dinosaur or one of its competitors.
You will spend far fewer sleepless nights trying to make money this way than by creating the next Angry Birds, Uber or Facebook clone.Reference: How to successfully attack a software dinosaur? from our JCG partner Maarten Ectors at the Telruptive blog....

Capacity Planning and the Project Portfolio

I was problem-solving with a potential client the other day. They want to manage their project portfolio. They use Jira, so they think they can see everything everyone is doing. (I’m a little skeptical, but, okay.) They want to know how much the teams can do, so they can do capacity planning based on what the teams can do. (Red flag #1) The worst part? They don’t have feature teams. They have component teams: front end, middleware, back end. You might, too. (Red flag #2) Problem #1: They have a very large program, not a series of unrelated projects. They also have projects. Problem #2: They want to use capacity planning, instead of flowing work through teams. They are setting themselves up to optimize at the lowest level, instead of optimizing at the highest level of the organization. If you read Manage Your Project Portfolio: Increase Your Capacity and Finish More Projects, you understand this problem. A program is a strategic collection of projects where the business value of all of the projects together is greater than that of any one project by itself. Each project has value, yes. But all together, the program has much more value. You have to consider the program as a whole. Don’t Predict the Project Portfolio Based on Capacity If you are considering doing capacity planning based on the teams’ estimation or previous capacity, don’t do it. First, you can’t possibly know based on previous data. Why? Because the teams are interconnected in interesting ways. When you have component teams, not feature teams, their interdependencies are significant and unpredictable. Your ability to predict the future based on past velocity? Zero. Nada. Zilch. This is legacy thinking from waterfall. Well, you can try to do it this way. But you will be wrong in many dimensions:

- You will make mistakes because of prediction based on estimation. Estimates are guesses. When you have teams using relative estimation, you have problems.
- Your estimates will be off because of the silent interdependencies that arise from component teams. No one can predict these if you have large stories, even if you do awesome program management. The larger the stories, the more your estimates are off. The longer the planning horizon, the more your estimates are off.
- You will miss all the great ideas for your project portfolio that arise from innovation you can’t predict in advance. As the teams complete features, and as the product owners realize what the teams can do, the teams and the product owners will have innovative ideas. You, the management team, want to be able to capitalize on this feedback.

It’s not that estimates are bad. It’s that estimates are off. The more teams you have, the less your estimates are normalized between teams. Your t-shirt sizes are not my Fibonacci numbers, and are not that team’s swarming or mobbing. (It doesn’t matter whether you have component teams or feature teams for this to be true.) When you have component teams, you have the additional problem of not knowing how the interdependencies affect your estimates. Your estimates will be off, because no one’s estimates take the interdependencies into account. You don’t want to normalize estimates among teams. You want to normalize story size. Once you make stories really small, it doesn’t matter what the estimates are. When you make stories really small, the product owners are in charge of the team’s capacity and release dates. Why? Because they are in charge of the backlogs and the roadmaps. The more a program stops trying to estimate at the low level, uses small stories, and manages interdependencies at the team level, the more momentum the program has. The part where you gather all the projects? Do that part. You need to see all the work. Yes, that part works and helps the program see where it is going.
Use Value for the Project Portfolio Okay, so you try to estimate the value of the features, epics, or themes in the roadmap of the project portfolio. Maybe you even use the cost of delay as Jutta and I suggest in Diving for Hidden Treasures: Finding the Real Value in Your Project Portfolio (yes, this book is still in progress). How will you know if you are correct? You don’t. You see the demos the teams provide, and you reassess on a reasonable time basis. What’s reasonable? Not every week or two. Give the teams a chance to make progress. If people are multitasking, not more often than once every two months, or every quarter. They have to get to each project. Hint: stop the multitasking and you get tons more throughput.Reference: Capacity Planning and the Project Portfolio from our JCG partner Johanna Rothman at the Managing Product Development blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.