
The next IT revolution: micro-servers and local cloud

Have you ever counted the number of Linux devices at home or work that haven’t been updated since they came out of the factory? Your cable/fibre/ADSL modem, your WiFi access point, television sets, NAS storage, routers/bridges, media centres, etc. Typically this class of devices hosts a proprietary hardware platform, an embedded proprietary Linux and a proprietary application. If you are lucky you can log into a web GUI, often using the admin/admin credentials, and upload a new firmware blob. This firmware blob is frequently hard to locate on the hardware supplier’s website. No wonder the NSA and others love to look into potential firmware bugs. They are the ideal source of undetected wiretapping.

The next IT revolution: micro-servers

The next IT revolution is about to happen, however. Those proprietary hardware platforms will soon give way to commodity multi-core processors from ARM, Intel, etc. General-purpose operating systems will replace legacy proprietary and embedded predecessors. Proprietary and static single-purpose apps will be replaced by marketplaces and multiple apps running on one device. Security updates will be sent regularly. Devices and apps will be easy to manage remotely. The next revolution will be around managing millions of micro-servers and the apps on top of them. These micro-servers will behave like a mix of phone apps, Docker containers, and cloud servers. Managing them will be like managing a “local cloud”, sometimes also called fog computing.

Micro-servers and IoT?

Are micro-servers some form of Internet of Things? They can be, but not always. If you have a smart hub that controls your home or office then it is pure IoT. However, if you have a router, firewall, fibre modem, micro-antenna station, etc. then the micro-server will just be an improved version of its predecessor.

Why should you care about micro-servers?

If you are a mobile app developer then the micro-server revolution will be your next battlefield. Local clouds need “Angry Birds”-like successes.
If you are a telecom or network developer then the next generation of micro-servers will give you unseen potential to combine traffic shaping with parental control with QoS with security with …
If you are a VC then micro-server solution providers are the type of startups you want to invest in.
If you are a hardware vendor then this is the type of device or SoC you want to build.
If you are a Big Data expert then imagine the new data tsunami these devices will generate.
If you are a machine learning expert then you might want to look at algorithms and models that are easy to execute on constrained devices once they have been trained on potentially thousands of cloud servers and petabytes of data.
If you are a DevOps engineer then your next challenge will be managing and operating millions of constrained servers.
If you are a cloud innovator then you are likely to want to look into SaaS and PaaS management solutions for micro-servers.
If you are a service provider then this is the type of solution you want to be able to manage at scale and integrate with easily.
If you are a security expert then you should start to think about micro-firewalls, anti-micro-viruses, etc.
If you are a business manager then you should think about how new “mega micro-revenue” streams can be obtained or how disruptive “micro-innovations” can give you a competitive advantage.
If you are an analyst or consultant then you can start predicting the next IT revolution and the billions the market will be worth in 2020.
The next steps…

It is still early days, but expect some major announcements around micro-servers in the coming months…

Reference: The next IT revolution: micro-servers and local cloud from our JCG partner Maarten Ectors at the Telruptive blog....

Named parameters in Java

Creating a method that has many parameters is a major sin. Whenever there is a need to create such a method, sniff the air: it is a code smell. Harden your unit tests and then refactor. No excuses, no buts. Refactor! Use the builder pattern or, even better, use a fluent API. For the latter the annotation processor fluflu may be of great help. Having said all that, we may come to a point in our lives when we face real life and not the idealistic patterns that we can follow in our hobby projects. There comes the legacy enterprise library monster that has a method with thousands of parameters, and you do not have the authority, time, courage or interest (bad for you) to modify… oops… refactor it. You could create a builder as a facade that hides the ugly API behind it, if you had the time. Creating a builder is still code that you have to unit test even before you write it (you know: TDD), and you just may not have the time. The code that calls the monstrous method is also there already; you just maintain it. You can still do a little trick. It may not be perfect, but it is still something. Assume that there is a method:

public void monster(String contactName, String contactId, String street, String district, ... Long pT){ ... }

The first thing is to select your local variables at the location of the caller wisely. A pity the names are already chosen and you may not want to change them. There can be a reason for that, for example an application-wide naming convention is followed that may make sense even if it is not your style. So the call:

monster(nm, "05300" + dI, getStrt(), d, ... , z+g % 3L );

is not exactly what I was talking about. That is what you have and you can live with it, or just insert new variables into the code:

String contactName = nm; String contactId = "05300" + dI; String street = getStrt(); String district = d; ... Long pT = z+g % 3L; monster(contactName, contactId, street, district, ... ,pT );

or you can even write it in a way that is not usual in Java, though perfectly legal:

String contactName, contactId, street, district; ... Long pT; monster(contactName = nm, contactId = "05300" + dI, street = getStrt(), district = d, ... ,pT = z+g % 3L );

Tasty, isn’t it? That depends. I would not argue about taste. If you do not like that, there is an alternative way. You can define auxiliary and very simple static methods:

static <T> T contactName(T t){ return t;} static <T> T contactId(T t){ return t;} static <T> T street(T t){ return t;} static <T> T district(T t){ return t;} ... static <T> T pT(T t){ return t;}

monster(contactName(nm), contactId("05300" + dI), street(getStrt()), district(d), ... ,pT(z+g % 3L) );

The code is still ugly but a bit more readable at the place of the caller. You can even collect the static methods into a utility class, or into an interface in the case of Java 8, with names like with, using, to and so on. You can statically import them into your code and have a method call as nice as:

doSomething(using(someParameter), with(someOtherParameter), to(resultStore));

When all that is there you can feel hunky-dory if you can answer the final question: what the blessed whatever* is parameter pT? (* “whatever” you can replace with some other words, whichever you like)

Reference: Named parameters in Java from our JCG partner Peter Verhas at the Java Deep blog....
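To make the helper-method trick runnable end to end, here is a minimal, self-contained sketch; the monster signature, variable names and values below are illustrative stand-ins rather than code from any real library:

public class NamedParameterDemo {

    // Identity helpers whose only job is to name the arguments at the call site
    static <T> T contactName(T t) { return t; }
    static <T> T contactId(T t)   { return t; }
    static <T> T street(T t)      { return t; }
    static <T> T district(T t)    { return t; }
    static <T> T pT(T t)          { return t; }

    // Stand-in for the legacy "monster" method with too many parameters
    static void monster(String contactName, String contactId, String street, String district, Long pT) {
        System.out.printf("%s %s %s %s %d%n", contactName, contactId, street, district, pT);
    }

    public static void main(String[] args) {
        String nm = "John Doe";
        String dI = "42";
        long z = 10, g = 7;

        // Reads almost like named parameters, without touching the legacy API
        monster(contactName(nm),
                contactId("05300" + dI),
                street("Main Street"),
                district("7th"),
                pT(z + g % 3L));
    }
}

The helpers can be statically imported, as the article suggests, so the call site stays readable while the legacy signature stays untouched.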

Managers Manage Ambiguity

I was thinking about Glen Alleman’s post, All Things Project Are Probabilistic. In it, he says “Management is Prediction”, as an inference from Deming. When I read this quote,

If you can’t describe what you are doing as a process, you don’t know what you’re doing. –Deming

I infer from Deming that managers must manage ambiguity. Here’s where Glen and I agree. Well, I think we agree. I hope I am not putting words into Glen’s mouth. I am sure he will correct me if I am. Managers make decisions based on uncertain data. Some of that data is predictive data. For example, I suggest that people provide, where necessary, order-of-magnitude estimates of projects and programs. Sometimes you need those estimates. Sometimes you don’t. (Yes, I have worked on programs where we didn’t need to estimate. We needed to execute and show progress.) Now, here’s where I suspect Glen and I disagree:

Asking people for detailed estimates at the beginning of a project and expecting those estimates to be true for the entire project. First, the estimates are guesses. Second, software is about learning. If you work in an agile way, you want to incorporate learning and change into the project or program. I have some posts about estimation in this blog queue where I discuss this.

Using estimation for the project portfolio. I see no point in using estimates instead of value for the project portfolio, especially if you use agile approaches to your projects. If we finish features, we can end the project at any time. We can release it. This makes software different from any other type of project. Why not exploit that difference? Value makes much more sense. You can incorporate cost of delay into value. If you use your estimate as a target, you have some predictable outcomes unless you get lucky: you will shortchange the feature by decreasing scope, incur technical debt, or increase the defects. Or all three.

What works for projects is honest status reporting, which traffic lights don’t provide. Demos provide that. Transparency about obstacles provides that. The ability to be honest about how to solve problems and work through issues provides that. Much has changed since I last worked on a DOD project. I’m delighted to see that Glen writes that many government projects are taking more agile approaches. However, if we always work on innovative, new work, we cannot predict with perfect estimation what it will take at the beginning, or even through the project. We can better our estimates as we proceed. We can have a process for our work. Regardless of our approach, as long as we don’t do code-and-fix, we have one. (In Manage It! Your Guide to Modern, Pragmatic Project Management, I say to choose an approach based on your context, and to choose any lifecycle except for code-and-fix.) We can refine our estimates, if management needs them. The question is this: why does management need them? For predicting future cost for a customer? Okay, that’s reasonable. Maybe on large programs, you do an estimate every quarter for the next quarter, based on what you completed, as in released, and what’s on the roadmap. You already know what you have done. You know what your challenges were. You can do better estimates. I would even do an EQF (estimation quality factor) for the entire project/program. Nobody has an open spigot of money. But, in my experience, the agile project or program will end before you expect it to. (See the comments on Capacity Planning and the Project Portfolio.)

But the project will only end early if you evaluate features based on value and if you collaborate with your customer. The customer will say, “I have enough now. I don’t need more.” It might occur before the last expected quarter. It might occur before the last expected half-year. That’s the real ambiguity that managers need to manage. Our estimates will not be correct. Technical leaders, project managers and product owners need to manage risks and value so the project stays on track. Managers need to ask the question: what if the project or program ends early? Ambiguity, anyone?

Reference: Managers Manage Ambiguity from our JCG partner Johanna Rothman at the Managing Product Development blog....

PL/SQL backtraces for debugging

For many PL/SQL developers this might be common sense, but for one of our customers it was an unknown PL/SQL feature: backtraces. When your application raises an error somewhere deep down in the call stack, you don’t get immediate information about the exact source of the error. For large PL/SQL applications, this can be a pain. One workaround is to keep track of the statement number that was last executed before any error occurred:

DECLARE v_statement_no NUMBER := 0; BEGIN v_statement_no := 1; SELECT ... v_statement_no := 2; INSERT ... v_statement_no := 3; ... EXCEPTION WHEN OTHERS THEN -- Log error message somewhere logger.error(module, v_statement_no, sqlerrm); END;

The above looks an awful lot like println-debugging, a thing that isn’t really known to Java developers! But println-debugging isn’t necessary in PL/SQL either. Use the DBMS_UTILITY.FORMAT_ERROR_BACKTRACE function instead! An example:

DECLARE PROCEDURE p4 IS BEGIN raise_application_error(-20000, 'Some Error'); END p4; PROCEDURE p3 IS BEGIN p4; END p3; PROCEDURE p2 IS BEGIN p3; END p2; PROCEDURE p1 IS BEGIN p2; END p1; BEGIN p1; EXCEPTION WHEN OTHERS THEN dbms_output.put_line(sqlerrm); dbms_output.put_line( dbms_utility.format_error_backtrace ); END; /

The above PL/SQL block generates the following output:

ORA-20000: Some Error
ORA-06512: at line 3
ORA-06512: at line 6
ORA-06512: at line 9
ORA-06512: at line 12
ORA-06512: at line 16

You can see exactly which line number generated the error. If you’re not using local procedures in anonymous blocks (which you quite likely aren’t), this gets even more useful:

CREATE PROCEDURE p4 IS BEGIN raise_application_error(-20000, 'Some Error'); END p4; / CREATE PROCEDURE p3 IS BEGIN p4; END p3; / CREATE PROCEDURE p2 IS BEGIN p3; END p2; / CREATE PROCEDURE p1 IS BEGIN p2; END p1; / BEGIN p1; EXCEPTION WHEN OTHERS THEN dbms_output.put_line(sqlerrm); dbms_output.put_line( dbms_utility.format_error_backtrace ); END; /

The above now outputs:

ORA-20000: Some Error
ORA-06512: at "PLAYGROUND.P4", line 2
ORA-06512: at "PLAYGROUND.P3", line 2
ORA-06512: at "PLAYGROUND.P2", line 2
ORA-06512: at "PLAYGROUND.P1", line 2
ORA-06512: at line 2

To learn more about the DBMS_UTILITY package, please consult the manual. True to the nature of all things called “UTILITY”, it contains pretty much random things that you wouldn’t expect there!

Reference: PL/SQL backtraces for debugging from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....
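As a hedged aside that is not part of the original post: when such an error is left unhandled in PL/SQL and propagates to a Java client, the Oracle JDBC driver normally carries the same ORA-06512 stack in the exception message, so it can be logged from Java as well. The connection URL, credentials and procedure name below are illustrative only.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class BacktraceFromJdbc {
    public static void main(String[] args) {
        // Illustrative connection details; adjust to your environment
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XE", "playground", "secret");
             CallableStatement call = con.prepareCall("{ call p1 }")) {
            call.execute();
        } catch (SQLException e) {
            // For an unhandled ORA-20000 raised deep inside p4, the message usually
            // contains the "ORA-06512: at ..." lines, i.e. the same backtrace as above
            System.err.println("Oracle error code: " + e.getErrorCode());
            System.err.println(e.getMessage());
        }
    }
}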

Metrics: Good VS Evil

Pop quiz: what do earned value, burn-down charts, and coverage reports have in common? They all report a metric, sure. But you could have gotten that from the title. Let’s dig deeper.

Let’s start with earned value. Earned value is a project tracking system where you get value points as you progress through the project. That means, if you’re 40% done with the task, you get, let’s say, 40 points. This is pre-agile thinking, assuming that we know everything about the task and that nothing can change along the way, and that therefore those 40 points mean something. We know that we can have a project 90% done, running for 3 years without a single demo to the customer. From experience, the feedback will change everything. But if we believe the metrics, we’re on track.

Burn-down charts measure how many story points we’ve got left in our sprint. They can forecast whether we’re on track and how many story points we’ll complete. In doing so, they assume that all stories are pretty much the same size. The forecast may not tell the true story if, for example, the big story doesn’t get completed. And somehow the conclusion is that we need to improve the estimation, rather than cut the story into smaller pieces.

Test coverage is another misleading metric. You can have 100% covered non-working code. You can show an increase in coverage on important, risky code, and you can also show a drop in safe code. Or you can do it the other way around, get the same numbers, and the same misplaced confidence.

These metrics, and others like them, have a few things in common:
They are easy to measure and track.
They already worked for someone else.
They are simple enough to understand and misinterpret.
Once we start to measure them, we forget the “what if” scenarios and return to their “one true meaning”.

In our ongoing search for simplicity, metrics help us minimize a whole uncertain world into a number. We like that. Not only do we not need to tire our minds with different scenarios, numbers always come with the “engineering” halo. And we can trust those engineers, right? I recently heard that we seem to forget that the I in KPI stands for indicator. Meaning, it points in some direction, but it can also take us off course if we don’t look around and understand the environment. Metrics are an over-simplification. We should use them like that. They may have a halo, but they can also be pretty devilish.

Reference: Metrics: Good VS Evil from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

Java Numeric Formatting

I can think of numerous times when I have seen others write unnecessary Java code, and have written unnecessary Java code myself, because of a lack of awareness of a JDK class that already provides the desired functionality. One example of this is the writing of time-related constants using hard-coded values such as 60, 24, 1440, and 86400 when TimeUnit provides a better, standardized approach. In this post, I look at another example of a class that provides functionality I have seen developers often implement on their own: NumberFormat. The NumberFormat class is part of the java.text package, which also includes the frequently used DateFormat and SimpleDateFormat classes. NumberFormat is an abstract class (no public constructor) and instances of its descendants are obtained via overloaded static methods with names such as getInstance(), getCurrencyInstance(), and getPercentInstance().

Currency

The next code listing demonstrates calling NumberFormat.getCurrencyInstance(Locale) to get an instance of NumberFormat that presents numbers in a currency-friendly format.

Demonstrating NumberFormat’s Currency Support

/** * Demonstrate use of a Currency Instance of NumberFormat. */ public void demonstrateCurrency() { writeHeaderToStandardOutput("Currency NumberFormat Examples"); final NumberFormat currencyFormat = NumberFormat.getCurrencyInstance(Locale.US); out.println("15.5 -> " + currencyFormat.format(15.5)); out.println("15.54 -> " + currencyFormat.format(15.54)); out.println("15.345 -> " + currencyFormat.format(15.345)); // rounds to two decimal places printCurrencyDetails(currencyFormat.getCurrency()); }

/** * Print out details of provided instance of Currency. * * @param currency Instance of Currency from which details * will be written to standard output. */ public void printCurrencyDetails(final Currency currency) { out.println("Currency: " + currency); out.println("\tISO 4217 Currency Code: " + currency.getCurrencyCode()); out.println("\tISO 4217 Numeric Code: " + currency.getNumericCode()); out.println("\tCurrency Display Name: " + currency.getDisplayName(Locale.US)); out.println("\tCurrency Symbol: " + currency.getSymbol(Locale.US)); out.println("\tCurrency Default Fraction Digits: " + currency.getDefaultFractionDigits()); }

When the above code is executed, the results are as shown next:

================================================================================== = Currency NumberFormat Examples ================================================================================== 15.5 -> $15.50 15.54 -> $15.54 15.345 -> $15.35 Currency: USD ISO 4217 Currency Code: USD ISO 4217 Numeric Code: 840 Currency Display Name: US Dollar Currency Symbol: $ Currency Default Fraction Digits: 2

The above code and associated output demonstrate that the NumberFormat instance used for currency (actually a DecimalFormat) automatically applies the appropriate number of digits and the appropriate currency symbol based on the locale.

Percentages

The next code listings and associated output demonstrate use of NumberFormat to present numbers in percentage-friendly format.

Demonstrating NumberFormat’s Percent Format

/** * Demonstrate use of a Percent Instance of NumberFormat.
*/ public void demonstratePercentage() { writeHeaderToStandardOutput("Percentage NumberFormat Examples"); final NumberFormat percentageFormat = NumberFormat.getPercentInstance(Locale.US); out.println("Instance of: " + percentageFormat.getClass().getCanonicalName()); out.println("1 -> " + percentageFormat.format(1)); // will be 0 because truncated to Integer by Integer division out.println("75/100 -> " + percentageFormat.format(75/100)); out.println(".75 -> " + percentageFormat.format(.75)); out.println("75.0/100 -> " + percentageFormat.format(75.0/100)); // will be 0 because truncated to Integer by Integer division out.println("83/93 -> " + percentageFormat.format((83/93))); out.println("93/83 -> " + percentageFormat.format(93/83)); out.println(".5 -> " + percentageFormat.format(.5)); out.println(".912 -> " + percentageFormat.format(.912)); out.println("---- Setting Minimum Fraction Digits to 1:"); percentageFormat.setMinimumFractionDigits(1); out.println("1 -> " + percentageFormat.format(1)); out.println(".75 -> " + percentageFormat.format(.75)); out.println("75.0/100 -> " + percentageFormat.format(75.0/100)); out.println(".912 -> " + percentageFormat.format(.912)); }

================================================================================== = Percentage NumberFormat Examples ================================================================================== 1 -> 100% 75/100 -> 0% .75 -> 75% 75.0/100 -> 75% 83/93 -> 0% 93/83 -> 100% .5 -> 50% .912 -> 91% ---- Setting Minimum Fraction Digits to 1: 1 -> 100.0% .75 -> 75.0% 75.0/100 -> 75.0% .912 -> 91.2%

The code and output of the percent NumberFormat usage demonstrate that by default the instance of NumberFormat (actually a DecimalFormat in this case) returned by the NumberFormat.getPercentInstance(Locale) method has no fractional digits, multiplies the provided number by 100 (assuming that it is the decimal equivalent of a percentage when provided), and adds a percentage sign (%).

Integers

The small amount of code shown next and its associated output demonstrate use of NumberFormat to present numbers in integral format.

Demonstrating NumberFormat’s Integer Format

/** * Demonstrate use of an Integer Instance of NumberFormat. */ public void demonstrateInteger() { writeHeaderToStandardOutput("Integer NumberFormat Examples"); final NumberFormat integerFormat = NumberFormat.getIntegerInstance(Locale.US); out.println("7.65 -> " + integerFormat.format(7.65)); out.println("7.5 -> " + integerFormat.format(7.5)); out.println("7.49 -> " + integerFormat.format(7.49)); out.println("-23.23 -> " + integerFormat.format(-23.23)); }

================================================================================== = Integer NumberFormat Examples ================================================================================== 7.65 -> 8 7.5 -> 8 7.49 -> 7 -23.23 -> -23

As demonstrated in the above code and associated output, the NumberFormat method getIntegerInstance(Locale) returns an instance that presents provided numerals as integers.

Fixed Digits

The next code listing and associated output demonstrate using NumberFormat to print fixed-point representations of floating-point numbers. In other words, this use of NumberFormat allows one to represent a number with an exactly prescribed number of digits to the left of the decimal point (“integer” digits) and to the right of the decimal point (“fraction” digits).
Demonstrating NumberFormat for Fixed-Point Numbers

/** * Demonstrate generic NumberFormat instance with rounding mode, * maximum fraction digits, and minimum integer digits specified. */ public void demonstrateNumberFormat() { writeHeaderToStandardOutput("NumberFormat Fixed-Point Examples"); final NumberFormat numberFormat = NumberFormat.getNumberInstance(); numberFormat.setRoundingMode(RoundingMode.HALF_UP); numberFormat.setMaximumFractionDigits(2); numberFormat.setMinimumIntegerDigits(1); out.println(numberFormat.format(234.234567)); out.println(numberFormat.format(1)); out.println(numberFormat.format(.234567)); out.println(numberFormat.format(.349)); out.println(numberFormat.format(.3499)); out.println(numberFormat.format(0.9999)); }

================================================================================== = NumberFormat Fixed-Point Examples ================================================================================== 234.23 1 0.23 0.34 0.35 1

The above code and associated output demonstrate the fine-grained control of the minimum number of “integer” digits to represent to the left of the decimal place (at least one, so zero shows up when applicable) and the maximum number of “fraction” digits to the right of the decimal point. Although not shown, the maximum number of integer digits and minimum number of fraction digits can also be specified.

Conclusion

I have used this post to look at how NumberFormat can be used to present numbers in different ways (currency, percentage, integer, fixed number of decimal points, etc.), which often means that little or no code needs to be written to massage numbers into these formats. When I first began writing this post, I envisioned including examples and discussion on the direct descendants of NumberFormat (DecimalFormat and ChoiceFormat), but have decided this post is already sufficiently lengthy. I may write about these descendants of NumberFormat in future blog posts.

Reference: Java Numeric Formatting from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
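Since the currency and percent instances above are really DecimalFormat objects, a small sketch of using DecimalFormat directly with an explicit pattern may help connect the dots; the pattern and sample values are illustrative and are not taken from the original post:

import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class DecimalFormatSketch {
    public static void main(String[] args) {
        // Pattern: grouping separators, at least one integer digit, exactly two fraction digits;
        // US symbols are fixed so the output below does not depend on the default locale
        final DecimalFormat fixed =
            new DecimalFormat("#,##0.00", DecimalFormatSymbols.getInstance(Locale.US));
        System.out.println(fixed.format(1234567.891)); // 1,234,567.89
        System.out.println(fixed.format(0.5));         // 0.50
    }
}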

Programming Language Job Trends Part 1 – August 2014

It is time for the August edition of the programming language job trends! The response to the language list changes was definitely positive, so things will be stable for this edition. In Part 1, we look at Java, C++, C#, Objective C, and Visual Basic. I did look at the trends for Swift, but the demand is not high enough yet. Part 2 (PHP, Python, JavaScript, and others) and Part 3 (Erlang, Groovy, Scala, and others) of the job trends will be posted in the next few days as well.

First, we look at the job trends from Indeed.com:

As you can see, there is a definite negative trend over the past three years. Java continues to lead, but its demand is less than half of what it was at its peak in 2009. C++ and C# have been following the same trend since 2010, which is a steady decline. I finally determined the appropriate search for Visual Basic, and it shows a clear decline over the entirety of the graph. In this installment, Visual Basic demand finally dips below Objective C. Interestingly, Objective C demand stays fairly flat over the past year. Given this, it makes you wonder whether mobile demand is really that high, or whether it is only replacing some of the application development. Normally, I would look at the short-term trends from SimplyHired, but the searches are not working correctly, especially for C++ and C#. As opposed to previous posts, SimplyHired’s data is much more current, but it is still not useful for analysis purposes.

Lastly, here is a review of the relative growth from Indeed:

Objective C continues to dominate the relative growth, but it has still been on a declining trend for over a year. The other languages in this analysis are all seeing a negative trend now, with C# just going negative in the past few months. Visual Basic is showing the largest decline, probably over 80%. Obviously, there are a number of reasons for this type of decline among this group of languages. It is difficult to assess whether mobile development affects these trends, especially because Java and Objective C are the main languages for native Android and iOS apps. The rise of alternative languages like Scala or Clojure could be contributing to this, as well as the rise of data science and machine learning. The breadth of languages being used, including mature languages like Python and Ruby, could also be affecting these trends, and we will look at these in the next few days.

Reference: Programming Language Job Trends Part 1 – August 2014 from our JCG partner Rob Diana at the Regular Geek blog....

Java Concurrency Tutorial – Thread-safe designs

After reviewing the main risks of dealing with concurrent programs (like atomicity or visibility), we will go through some class designs that will help us prevent the aforementioned bugs. Some of these designs result in the construction of thread-safe objects, allowing us to share them safely between threads. As an example, we will consider immutable and stateless objects. Other designs will prevent different threads from modifying the same data, like thread-local variables. You can see all the source code at GitHub.

1. Immutable objects

Immutable objects have a state (data which represents the object’s state), but it is built upon construction, and once the object is instantiated, the state cannot be modified. Although threads may interleave, the object has only one possible state. Since all fields are read-only, not a single thread will be able to change the object’s data. For this reason, an immutable object is inherently thread-safe. Product shows an example of an immutable class. It builds all its data during construction and none of its fields are modifiable:

public final class Product { private final String id; private final String name; private final double price; public Product(String id, String name, double price) { this.id = id; this.name = name; this.price = price; } public String getId() { return this.id; } public String getName() { return this.name; } public double getPrice() { return this.price; } public String toString() { return new StringBuilder(this.id).append("-").append(this.name) .append(" (").append(this.price).append(")").toString(); } public boolean equals(Object x) { if (this == x) return true; if (x == null) return false; if (this.getClass() != x.getClass()) return false; Product that = (Product) x; if (!this.id.equals(that.id)) return false; if (!this.name.equals(that.name)) return false; if (this.price != that.price) return false; return true; } public int hashCode() { int hash = 17; hash = 31 * hash + this.getId().hashCode(); hash = 31 * hash + this.getName().hashCode(); hash = 31 * hash + ((Double) this.getPrice()).hashCode(); return hash; } }

In some cases, it won’t be sufficient to make a field final. For example, the MutableProduct class is not immutable although all its fields are final:

public final class MutableProduct { private final String id; private final String name; private final double price; private final List<String> categories = new ArrayList<>(); public MutableProduct(String id, String name, double price) { this.id = id; this.name = name; this.price = price; this.categories.add("A"); this.categories.add("B"); this.categories.add("C"); } public String getId() { return this.id; } public String getName() { return this.name; } public double getPrice() { return this.price; } public List<String> getCategories() { return this.categories; } public List<String> getCategoriesUnmodifiable() { return Collections.unmodifiableList(categories); } public String toString() { return new StringBuilder(this.id).append("-").append(this.name) .append(" (").append(this.price).append(")").toString(); } }

Why is the above class not immutable? The reason is that we let a reference escape from the scope of its class. The field ‘categories‘ is a mutable reference, so after returning it, the client could modify it.
In order to show this, consider the following program:

public static void main(String[] args) { MutableProduct p = new MutableProduct("1", "a product", 43.00); System.out.println("Product categories"); for (String c : p.getCategories()) System.out.println(c); p.getCategories().remove(0); System.out.println("\nModified Product categories"); for (String c : p.getCategories()) System.out.println(c); }

And the console output:

Product categories
A
B
C

Modified Product categories
B
C

Since the categories field is mutable and it escaped the object’s scope, the client has modified the categories list. The product, which was supposed to be immutable, has been modified, leading to a new state. If you want to expose the contents of the list, you could use an unmodifiable view of it:

public List<String> getCategoriesUnmodifiable() { return Collections.unmodifiableList(categories); }

2. Stateless objects

Stateless objects are similar to immutable objects, but in this case they do not have a state, not even one. When an object is stateless it does not have to remember any data between invocations. Since there is no state to modify, one thread will not be able to affect the result of another thread invoking the object’s operations. For this reason, a stateless class is inherently thread-safe. ProductHandler is an example of this type of object. It contains several operations over Product objects and it does not store any data between invocations. The result of an operation does not depend on previous invocations or any stored data:

public class ProductHandler { private static final int DISCOUNT = 90; public Product applyDiscount(Product p) { double finalPrice = p.getPrice() * DISCOUNT / 100; return new Product(p.getId(), p.getName(), finalPrice); } public double sumCart(List<Product> cart) { double total = 0.0; for (Product p : cart.toArray(new Product[0])) total += p.getPrice(); return total; } }

In its sumCart method, the ProductHandler converts the product list to an array, since the for-each loop uses an iterator internally to iterate through its elements. List iterators are not thread-safe and could throw a ConcurrentModificationException if the list is modified during iteration. Depending on your needs, you might choose a different strategy.

3. Thread-local variables

Thread-local variables are those variables defined within the scope of a thread. No other threads will see or modify them. The first type is local variables. In the example below, the total variable is stored in the thread’s stack:

public double sumCart(List<Product> cart) { double total = 0.0; for (Product p : cart.toArray(new Product[0])) total += p.getPrice(); return total; }

Just take into account that if, instead of a primitive, you define a reference and return it, it will escape its scope. You may not know where the returned reference is stored. The code that calls the sumCart method could store it in a static field and allow it to be shared between different threads. The second type is the ThreadLocal class. This class provides storage that is independent for each thread. Values stored into an instance of ThreadLocal are accessible from any code within the same thread.
The ClientRequestId class shows an example of ThreadLocal usage:

public class ClientRequestId { private static final ThreadLocal<String> id = new ThreadLocal<String>() { @Override protected String initialValue() { return UUID.randomUUID().toString(); } }; public static String get() { return id.get(); } }

The ProductHandlerThreadLocal class uses ClientRequestId to return the same generated id within the same thread:

public class ProductHandlerThreadLocal { //Same methods as in ProductHandler class public String generateOrderId() { return ClientRequestId.get(); } }

If you execute the main method, the console output will show different ids for each thread. As an example:

T1 - 23dccaa2-8f34-43ec-bbfa-01cec5df3258
T2 - 936d0d9d-b507-46c0-a264-4b51ac3f527d
T2 - 936d0d9d-b507-46c0-a264-4b51ac3f527d
T3 - 126b8359-3bcc-46b9-859a-d305aff22c7e
...

If you are going to use ThreadLocal, you should be aware of some of the risks of using it when threads are pooled (like in application servers). You could end up with memory leaks or information leaking between requests. I won’t go into detail on this subject since the post How to shoot yourself in foot with ThreadLocals explains well how this can happen.

4. Using synchronization

Another way of providing thread-safe access to objects is through synchronization. If we synchronize all accesses to a reference, only a single thread will access it at a given time. We will discuss this in further posts.

5. Conclusion

We have seen several techniques that help us build simpler objects that can be shared safely between threads. It is much harder to prevent concurrent bugs if an object can have multiple states. On the other hand, if an object can have only one state or none, we won’t have to worry about different threads accessing it at the same time.

Reference: Java Concurrency Tutorial – Thread-safe designs from our JCG partner Xavier Padro at the Xavier Padró’s Blog blog....
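Section 4 only names synchronization, so as a hedged illustration (not code from the original tutorial), a minimal synchronized wrapper could look like this: both methods lock on the same monitor, so reads and writes of the shared counter never interleave.

public class SynchronizedCounter {

    private long count = 0;

    // Only one thread at a time can execute either method on the same instance
    public synchronized void increment() {
        count++;
    }

    public synchronized long current() {
        return count;
    }
}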

Integrating jOOQ with PostgreSQL: Partitioning

Introduction

jOOQ is a great framework when you want to work with SQL in Java without having too much ORM in your way. At the same time, it can be integrated into many environments as it offers support for many database-specific features. One such database-specific feature is partitioning in PostgreSQL. Partitioning in PostgreSQL is mainly used for performance reasons because it can improve query performance in certain situations. jOOQ has no explicit support for this feature, but it can be integrated quite easily, as we will show you.

This article is brought to you by the Germany-based jOOQ integration partner UWS Software Service (UWS). UWS is specialised in custom software development, application modernisation and outsourcing with a distinct focus on the Java Enterprise ecosystem.

Partitioning in PostgreSQL

With the partitioning feature of PostgreSQL you have the possibility of splitting data that would form a huge table into multiple separate tables. Each of the partitions is a normal table which inherits its columns and constraints from a parent table. This so-called table inheritance can be used for “range partitioning” where, for example, the data from one range does not overlap the data from another range in terms of identifiers, dates or other criteria. As in the following example, you can have partitioning for a table “author” where all rows in a partition share the same foreign key to a table “authorgroup”.

CREATE TABLE author ( authorgroup_id int, LastName varchar(255) );

CREATE TABLE author_1 ( CONSTRAINT authorgroup_id_check_1 CHECK ((authorgroup_id = 1)) ) INHERITS (author);

CREATE TABLE author_2 ( CONSTRAINT authorgroup_id_check_2 CHECK ((authorgroup_id = 2)) ) INHERITS (author);

...

As you can see, we set up inheritance and – in order to have a simple example – we just put in one constraint checking that the rows of a partition have the same “authorgroup_id”. Basically, this results in the “author” table only containing table and column definitions, but no data. However, when querying the “author” table, PostgreSQL will really query all the inheriting “author_n” tables, returning a combined result.

A trivial approach to using jOOQ with partitioning

In order to work with the partitioning described above, jOOQ offers several options. You can use the default way, which is to let jOOQ generate one class per table. In order to insert data into multiple tables, you would have to use different classes. This approach is used in the following snippet:

// add InsertQuery query1 = dsl.insertQuery(AUTHOR_1); query1.addValue(AUTHOR_1.ID, 1); query1.addValue(AUTHOR_1.LAST_NAME, "Nowak"); query1.execute();

InsertQuery query2 = dsl.insertQuery(AUTHOR_2); query2.addValue(AUTHOR_2.ID, 1); query2.addValue(AUTHOR_2.LAST_NAME, "Nowak"); query2.execute();

// select Assert.assertTrue(dsl .selectFrom(AUTHOR_1) .where(AUTHOR_1.LAST_NAME.eq("Nowak")) .fetch().size() == 1);

Assert.assertTrue(dsl .selectFrom(AUTHOR_2) .where(AUTHOR_2.LAST_NAME.eq("Nowak")) .fetch().size() == 1);

You can see that multiple classes generated by jOOQ need to be used, so depending on how many partitions you have, the generated classes can pollute your codebase. Also, imagine that you eventually need to iterate over partitions, which would be cumbersome to do with this approach. Another approach could be to use jOOQ to build fields and tables using string manipulation, but that is error-prone again and prevents support for generic type safety. Also, consider the case where you want true data separation in terms of multi-tenancy.
You see that there are some considerations to make when working with partitioning. Fortunately, jOOQ offers various ways of working with partitioned tables, and in the following we’ll compare approaches so that you can choose the one most suitable for you.

Using jOOQ with partitioning and multi-tenancy

jOOQ’s runtime schema mapping is often used to realize database environments, such that, for example, during development one database is queried, but when deployed to production the queries go to another database. Multi-tenancy is another recommended use case for runtime schema mapping, as it allows for strict partitioning and for configuring your application to only use databases or tables configured in the runtime schema mapping. So running the same code would result in working with different databases or tables depending on the configuration, which allows for true separation of data in terms of multi-tenancy. The following configuration, taken from the jOOQ documentation, is executed when creating the DSLContext, so it can be considered a system-wide setting:

Settings settings = new Settings() .withRenderMapping(new RenderMapping() .withSchemata( new MappedSchema().withInput("DEV") .withOutput("MY_BOOK_WORLD") .withTables( new MappedTable().withInput("AUTHOR") .withOutput("AUTHOR_1"))));

// Add the settings to the Configuration DSLContext create = DSL.using( connection, SQLDialect.ORACLE, settings);

// Run queries with the "mapped" configuration create.selectFrom(AUTHOR).fetch();

// results in SQL: // “SELECT * FROM MY_BOOK_WORLD.AUTHOR_1”

Using this approach you can map one table to one partition permanently, e.g. “AUTHOR” to “AUTHOR_1” for the environment “DEV”. In another environment you could choose to map the “AUTHOR” table to “AUTHOR_2”. Runtime schema mapping only allows you to map to exactly one table on a per-query basis, so you could not handle the use case where you would want to manipulate more than one table partition. If you would like to have more flexibility, you might want to consider the next approach.

Using jOOQ with partitioning and without multi-tenancy

If you need to handle multiple table partitions without having multi-tenancy, you need a more flexible way of accessing partitions. The following example shows how you can do it in a dynamic and type-safe way, avoiding errors and staying usable in the same elegant way you are used to with jOOQ:

// add for(int i=1; i<=2; i++) { Builder part = forPartition(i); InsertQuery query = dsl.insertQuery(part.table(AUTHOR)); query.addValue(part.field(AUTHOR.ID), 1); query.addValue(part.field(AUTHOR.LAST_NAME), "Nowak"); query.execute(); }

// select for(int i=1; i<=2; i++) { Builder part = forPartition(i); Assert.assertTrue(dsl .selectFrom(part.table(AUTHOR)) .where(part.field(AUTHOR.LAST_NAME).eq("Nowak")) .fetch() .size() == 1); }

What you can see above is that the partition numbers are abstracted away, so that you can use the “AUTHOR” table instead of “AUTHOR_1”. Thus, your code won’t be polluted with many generated classes. Another thing is that the partitioner object is initialized dynamically, so you can use it, for example, in a loop like the one above. It also follows the Builder pattern, so that you can operate on it like you are used to with jOOQ. The code above does exactly the same as the first trivial snippet, but there are multiple benefits, like type-safe and reusable access to partitioned tables.
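The article links to the full partitioner utility; its real implementation may differ, but a minimal sketch of such a Builder could look like the following, assuming only the plain jOOQ factory methods DSL.name, DSL.table and DSL.field (the names forPartition, table and field simply mirror the snippet above):

import org.jooq.Field;
import org.jooq.Record;
import org.jooq.Table;
import org.jooq.impl.DSL;

public final class Partitioner {

    public static Builder forPartition(int partition) {
        return new Builder(partition);
    }

    public static final class Builder {
        private final int partition;

        private Builder(int partition) {
            this.partition = partition;
        }

        // Redirect the generated parent table (e.g. AUTHOR) to its partition (AUTHOR_1, AUTHOR_2, ...)
        public Table<Record> table(Table<?> parent) {
            return DSL.table(DSL.name(parent.getName() + "_" + partition));
        }

        // Re-create the generated field by name so it refers to the partition table instead of the parent
        public <T> Field<T> field(Field<T> parentField) {
            return DSL.field(DSL.name(parentField.getName()), parentField.getDataType());
        }
    }
}

With the code generator configured to skip the AUTHOR_n tables (as described in the Maven section below), a class along these lines would be the only place that needs to know how partition names are derived.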
Integration of jOOQ partitioning without multi-tenancy into a Maven build process (optional)

If you are using Continuous Integration, you can integrate the solution above so that jOOQ does not generate classes for the individual partition tables. This can be achieved using a regular expression that excludes certain table names when generating Java classes. When using Maven, your integration might look something like this:

<generator> <name>org.jooq.util.DefaultGenerator</name> <database> <name>org.jooq.util.postgres.PostgresDatabase</name> <includes>.*</includes> <excludes>.*_[0-9]+</excludes> <inputSchema>${db.schema}</inputSchema> </database> <target> <packageName>com.your.company.jooq</packageName> <directory>target/generated-sources/jooq</directory> </target> </generator>

Then it’s just a matter of calling mvn install, and the jOOQ Maven plugin will generate the Java classes from the database schema at build time.

Integrating jOOQ with PostgreSQL: Partitioning

This article described how jOOQ in combination with the partitioning feature of PostgreSQL can be used to implement multi-tenancy and improve database performance. PostgreSQL’s documentation states that for partitioning “the benefits will normally be worthwhile only when a table would otherwise be very large. The exact point at which a table will benefit from partitioning depends on the application, although a rule of thumb is that the size of the table should exceed the physical memory of the database server.” Achieving support for partitioning with jOOQ is as easy as adding configuration or a small utility class; jOOQ is then able to support partitioning with or without multi-tenancy and without sacrificing type safety. Apart from Java-level integration, the described solution also integrates smoothly into your build and test process. You may want to look at the sources of the partitioner utility class, which also includes a test class, so that you can see the behavior and integration in more detail. Please let us know if you need support for this or other jOOQ integrations within your environment. UWS Software Service (UWS) is an official jOOQ integration partner.

Reference: Integrating jOOQ with PostgreSQL: Partitioning from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Feature Toggles (Feature Switches or Feature Flags) vs Feature Branches

Feature Branches

If you are using branches, you are not doing Continuous Integration / Deployment / Delivery! You might have great code coverage with unit tests, you might be doing TDD, you might have functional and integration tests written in BDD format, and you might run all of them on every commit to the repository. However, if you have branches, integration is delayed until they are merged, and that means that there is no continuous integration.

People tend to like feature branches. They provide the flexibility to decide what to release and when. All there is to do is merge the features that will be released into the main branch and deploy to production. The problem with this approach is the delay. Integration is delayed until the merge. We cannot know whether all those separately developed features work well together until the merge is done and all the unit, functional, integration and manual tests are run. On top of that, there is the problem caused by the pain of the merge itself. Days, weeks or even months of work are suddenly merged into the main branch and from there into all not-yet-to-be-released branches. The idea of Continuous Integration is to detect problems as soon as possible: minimum delay until problems are found. The most common way to mitigate this problem is to keep feature branches small and short-lived. They can be merged into the main branch after only a day or two. However, in many cases this is simply not possible. A feature might be too big to be developed in only a few days. The business might require us to release it only in conjunction with other features. In other words, feature branches are great for allowing us to decide what to release and when. They provide us with flexibility, but they prevent us from doing Continuous Integration. If there were only one branch, we could do continuous integration, but the release problems that were solved with feature branches would come back to haunt us.

Feature Toggles

Feature Toggles (sometimes called Feature Switches or Feature Flags) address the need to deploy only a selected set of features while keeping only one (main) branch. With them we can do all the work in one branch, have continuous integration taking care of the quality of our code, and use flags to turn features off until they are ready to be released. We can have all the benefits of Continuous Integration together with the flexibility to choose which features will be available and which will be hidden. Moreover, it is a step towards Continuous Deployment. If we do have satisfactory automated test coverage and can switch features off and on, there is nothing really preventing us from deploying to production every commit that passed all verification. Even if some bug sneaks into production, with Feature Toggles it would be very easy to turn that feature off until it is fixed. The basic idea is to have a configuration file that defines toggles for features that are not yet fully released. An alternative to a configuration file can be a database table. The application should use those toggles to decide whether to make some feature available to the users or not. They might even be used to display some functionality to a subset of users based on their role, geographic location or a random sample. There are only a few rules to be followed:

Use toggles only until they are fully deployed and proven to work. Otherwise, you might end up with “spaghetti code” full of if/else statements containing old toggles that are not in use any more.

Do not spend too much time testing toggles. In most cases it is enough to confirm that the entry point into some new feature is not visible. That can be, for example, a link to that new feature.

Do not overuse toggles. Do not use them when there is no need for them. For example, you might be developing a new screen that is accessible through a link on the home page. If that link is added at the end, there might be no need to have a toggle that hides it.

Examples

There are many libraries that provide a solution for Feature Toggles. However, implementing them is so easy that you might choose to do it yourself. Here are a few examples of possible implementations of Feature Toggles. They use AngularJS, but the logic behind them could be applied to any other language or framework.

[JSON with Feature Toggles] { "feature1": { "displayed": false }, "feature2": { "displayed": true }, "feature3": { "displayed": false } }

I tend to have more values in my toggles JSON, some of them being disabled, description, allowed_users, etc. The above example is only the bare-bones minimal solution. Next, we should load the JSON into, in this example, the AngularJS scope.

[AngularJS that loads Feature Toggles into the scope] $http.get('/api/v1/data/features').then(function(response) { $scope.features = response.data; });

Once Feature Toggles are in the scope, the rest is fairly easy. The following AngularJS example hides the feature when displayed is set to false.

[AngularJS HTML that hides some element] <div ng-show="features.feature1.displayed"> <!--Feature HTML--> </div>

In some cases, you might be working on a completely new screen that will replace the old one. In that scenario, something like the following might be the solution.

[AngularJS that returns URL depending on Feature Toggle] $scope.getMyUrl = function() { if($scope.features.feature2.displayed) { return 'myNewUrl'; } else { return 'myOldUrl'; } }

Then, from the HTML, it would be something like the example below.

[AngularJS HTML that toggles URL] <a href="{{getMyUrl()}}"> This link leads somewhere </a>

In some other cases the change might be on the server. Following REST API best practices, you’d create a new API version and use a Feature Toggle to decide which one to use. The code could look like the following.

[AngularJS that performs request that depends on Feature Toggle] var apiUrl; if($scope.features.feature2.displayed) { apiUrl = '/api/v2/myFancyFeature'; } else { apiUrl = '/api/v1/myFancyFeature'; } $http.get(apiUrl).then(function(response) { // Do something with the response });

The same logic applied to the front-end could be applied to the back-end or any other type of application. In most cases it boils down to simple if/else statements.

Reference: Feature Toggles (Feature Switches or Feature Flags) vs Feature Branches from our JCG partner Viktor Farcic at the Technology conversations blog....
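As a back-end counterpart to the AngularJS snippets, here is a hypothetical Java sketch (not from the original article): the toggles live in a plain properties file such as feature2.displayed=true, and the service branches with the same kind of if/else shown above.

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class FeatureToggles {

    private final Properties toggles = new Properties();

    public FeatureToggles(String path) throws IOException {
        // e.g. a file containing lines like: feature1.displayed=false
        try (FileInputStream in = new FileInputStream(path)) {
            toggles.load(in);
        }
    }

    public boolean isDisplayed(String feature) {
        return Boolean.parseBoolean(toggles.getProperty(feature + ".displayed", "false"));
    }
}

// Usage inside a service or controller, mirroring the /api/v1 vs /api/v2 example:
// String apiUrl = toggles.isDisplayed("feature2") ? "/api/v2/myFancyFeature" : "/api/v1/myFancyFeature";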