
Decorate with decorator design pattern

The Decorator pattern is one of the most widely used structural patterns. It dynamically adds functionality to an object at runtime without affecting the existing behaviour of other objects of the same class. In short, the pattern adds responsibilities to an object by wrapping it.

Problem statement: Imagine a scenario where we have a pizza that is already baked with tomato and cheese. Afterwards you recall that you need to add extra toppings of the customer's choice, so you need to apply additional toppings such as chicken and pepper on the go.

Intent: Add or remove responsibilities of an object dynamically without impacting the original object. This is needed when adding functionality by subclassing is impractical, because covering every combination would create a flood of subclasses.

Solution: In this case we do not use inheritance to add functionality to the object (the pizza); instead we use composition. This pattern is useful when we prefer composition over inheritance.

Structure

The participants of the Decorator design pattern are:
- Component – defines the interface for objects that can have responsibilities added to them dynamically at runtime.
- Concrete Component – the original object to which the additional functionality is added.
- Decorator – an abstract class which contains a reference to the Component object and also implements the Component interface.
- Concrete Decorator – extends the Decorator and builds additional functionality on top of the Component.

Example: In this example the Pizza interface acts as the Component and BasicPizza is the Concrete Component that needs to be decorated. The PizzaDecorator acts as the Decorator abstract class, holding a reference to a Pizza. The ChickenTikkaPizza is the Concrete Decorator which builds additional functionality onto the Pizza.

Let's summarize the steps to implement the decorator design pattern:
1. Create an interface Pizza for the BasicPizza (Concrete Component) that we want to decorate.
2. Create an abstract class PizzaDecorator that contains a reference field of the decorated Pizza interface. Note: the decorator (PizzaDecorator) must implement the same decorated (Pizza) interface.
3. Pass the Pizza object that you want to decorate into the constructor of the decorator.
4. Create a Concrete Decorator (ChickenTikkaPizza) which provides the additional topping functionality. The Concrete Decorator should extend the PizzaDecorator abstract class.
5. Redirect methods of the decorator (bakePizza()) to the decorated class's core implementation.
6. Override methods (bakePizza()) where you need to change behaviour, e.g. to add the Chicken Tikka topping.
7. Let the client create a Component-typed (Pizza) object by wrapping a Concrete Component (BasicPizza) in a Concrete Decorator (ChickenTikkaPizza), as shown in the sketch below.
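Because each decorator is itself a Pizza, decorators can also be stacked. As a quick hedged sketch using the classes listed below (PepperPizza is a hypothetical second decorator, written the same way as ChickenTikkaPizza and not part of the original example):

Pizza pizza = new PepperPizza(new ChickenTikkaPizza(new BasicPizza()));
System.out.println(pizza.bakePizza());
// assuming PepperPizza appends " with Pepper topping added", this prints:
// Basic Pizza with Chicken topping added with Pepper topping added

Each wrapper delegates to the object it wraps and appends its own behaviour, which is exactly why the pattern scales to arbitrary topping combinations without new subclasses.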
To remember it in short: New Component = Concrete Component + Concrete Decorator

Pizza pizza = new ChickenTikkaPizza(new BasicPizza());

Code Example:

Pizza.java

public interface Pizza {
    public String bakePizza();
}

BasicPizza.java

public class BasicPizza implements Pizza {
    public String bakePizza() {
        return "Basic Pizza";
    }
}

PizzaDecorator.java

public abstract class PizzaDecorator implements Pizza {
    Pizza pizza;

    public PizzaDecorator(Pizza newPizza) {
        this.pizza = newPizza;
    }

    @Override
    public String bakePizza() {
        return pizza.bakePizza();
    }
}

ChickenTikkaPizza.java

public class ChickenTikkaPizza extends PizzaDecorator {
    public ChickenTikkaPizza(Pizza newPizza) {
        super(newPizza);
    }

    public String bakePizza() {
        return pizza.bakePizza() + " with Chicken topping added";
    }
}

Client.java

public static void main(String[] args) {
    Pizza pizza = new ChickenTikkaPizza(new BasicPizza());
    System.out.println(pizza.bakePizza());
}

Benefits: The decorator design pattern provides more flexibility than standard inheritance. Inheritance also extends the parent class's responsibilities, but in a static manner; the decorator allows doing this in a dynamic fashion.

Drawback: Code debugging might be difficult, since this pattern adds functionality at runtime.

Interesting points:
- The Adapter pattern plugs different interfaces together, whereas the Decorator pattern enhances the functionality of an object.
- Unlike the Decorator pattern, the Strategy pattern changes the original object without wrapping it.
- While the Proxy pattern controls access to an object, the Decorator pattern enhances its functionality.
- Both the Composite and Decorator patterns use the same tree structure, but there are subtle differences between them. Use the Composite pattern when you need to keep a group of objects with similar behaviour inside another object; use the Decorator pattern when you need to modify the functionality of an object at runtime.

There are various live examples of the decorator pattern in the Java API:

java.io.BufferedReader;
java.io.FileReader;
java.io.Reader;

If we look at the constructor of BufferedReader, we can see that BufferedReader wraps the Reader class, adding features such as readLine(), which is not present in Reader. Clients use it in the same shape as the example above:

new BufferedReader(new FileReader(new File("File1.txt")));

Similarly, BufferedInputStream is a decorator for the decorated object FileInputStream:

BufferedInputStream bs = new BufferedInputStream(new FileInputStream(new File("File1.txt")));

Reference: Gang of Four – Decorate with decorator design pattern from our JCG partner Mainak Goswami at the Idiotechie blog.

Changes to String.substring in Java 7

It is common knowledge that Java optimizes the substring operation for the case where you generate a lot of substrings of the same source string. It does this by using the (value, offset, count) way of storing the information: the strings 'Hello' and 'World!' derived from 'Hello World!' are represented in the heap by a single character array containing 'Hello World!' plus two references into it.

This method of storage is advantageous in some cases, for example for a compiler which tokenizes source files. In other instances it may lead you to an OutOfMemoryError (if you are routinely reading long strings and only keeping a small part of them – the above mechanism prevents the GC from collecting the original String buffer). Some even call it a bug. I wouldn't go so far, but it's certainly a leaky abstraction, because you were forced to do the following to ensure that a copy was made: new String(str.substring(5, 6)).

This all changed in May of 2012, or Java 7u6. The pendulum has swung back and now full copies are made by default. What does this mean for you?
- For most, it is probably just a nice piece of Java trivia.
- If you are writing parsers and such, you cannot rely any more on the implicit caching provided by String. You will need to implement a similar mechanism based on buffering and a custom implementation of CharSequence.
- If you were doing new String(str.substring(...)) to force a copy of the character buffer, you can stop as soon as you update to the latest Java 7 (and you need to do that quite soon, since Java 6 is being EOL'd as we speak).

Thankfully the development of Java is an open process and such information is at the fingertips of everyone! A couple more references (since we don't say pointers in Java) related to Strings:
- If you are storing the same string over and over again (maybe you're parsing messages from a socket, for example), you should read up on alternatives to String.intern() (and also consider reading Item 50 from the second edition of Effective Java: Avoid strings where other types are more appropriate).
- Look into (and do benchmarks before using them!) options like UseCompressedStrings (which seems to have been removed), UseStringCache and StringCache.
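Coming back to the substring change itself, here is a minimal hedged sketch of the difference; the comments describe heap behaviour rather than program output:

public class SubstringDemo {
    public static void main(String[] args) {
        // Build a large string, then keep only a tiny slice of it.
        String big = new String(new char[10_000_000]).replace('\0', 'x');
        String tiny = big.substring(0, 5);
        // Before Java 7u6: tiny shared big's 10M-char array, pinning it in memory.
        // From Java 7u6 on: substring() copies, so big's array becomes collectable.
        String forcedCopy = new String(big.substring(0, 5)); // the old defensive idiom, now redundant
        System.out.println(tiny + " / " + forcedCopy);
    }
}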
Hope I didn't string you along too much and you found this useful! Until next time – Attila Balazs.

Meta: this post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on! Want to write for the blog? We are looking for contributors to fill all 24 slots and would love to have your contribution! Contact Attila Balazs to contribute!

Reference: Changes to String.substring in Java 7 from our JCG partner Attila-Mihaly Balazs at the Java Advent Calendar blog.

The Big List of 256 Programming Languages

The holiday season typically brings lots of vacation time for people. Instead of sitting around and being lazy, why not take the time to learn a new programming language? I am not recommending a specific language over the others at this time, but providing a long list of languages based on GitHub and TIOBE. I have not tried to categorize or validate this list in any way, so please do not complain about some ancient or useless technology being listed. If you think there is a language that should be added, please leave it in a comment along with a link to information about the language, preferably on Wikipedia or the actual language site. I give no guarantees that the links for these languages are what GitHub or TIOBE meant, but since they do not link to an official site for each language, I did my best to find something.

4th Dimension/4D, ABAP, ABC, ActionScript, Ada, Agilent VEE, Algol, Alice, Angelscript, Apex, APL, AppleScript, Arc, Arduino, ASP, AspectJ, Assembly, ATLAS, Augeas, AutoHotkey, AutoIt, AutoLISP, Automator, Avenue, Awk, Bash, (Visual) Basic, bc, BCPL, BETA, BlitzMax, Boo, Bourne Shell, Bro, C, C Shell, C#, C++, C++/CLI, C-Omega, Caml, Ceylon, CFML, cg, Ch, CHILL, CIL, CL (OS/400), Clarion, Clean, Clipper, Clojure, CLU, COBOL, Cobra, CoffeeScript, ColdFusion, COMAL, Common Lisp, Coq, cT, Curl, D, Dart, DCL, DCPU-16 ASM, Delphi/Object Pascal, DiBOL, Dylan, E, eC, Ecl, ECMAScript, EGL, Eiffel, Elixir, Emacs Lisp, Erlang, Etoys, Euphoria, EXEC, F#, Factor, Falcon, Fancy, Fantom, Felix, Forth, Fortran, Fortress, (Visual) FoxPro, Gambas, GNU Octave, Go, Google AppsScript, Gosu, Groovy, Haskell, haXe, Heron, HPL, HyperTalk, Icon, IDL, Inform, Informix-4GL, INTERCAL, Io, Ioke, J, J#, JADE, Java, Java FX Script, JavaScript, JScript, JScript.NET, Julia, Korn Shell, Kotlin, LabVIEW, Ladder Logic, Lasso, Limbo, Lingo, Lisp, Logo, Logtalk, LotusScript, LPC, Lua, Lustre, M4, MAD, Magic, Magik, Malbolge, MANTIS, Maple, Mathematica, MATLAB, Max/MSP, MAXScript, MEL, Mercury, Mirah, Miva, ML, Monkey, Modula-2, Modula-3, MOO, Moto, MS-DOS Batch, MUMPS, NATURAL, Nemerle, Nimrod, NQC, NSIS, Nu, NXT-G, Oberon, Object Rexx, Objective-C, Objective-J, OCaml, Occam, ooc, Opa, OpenCL, OpenEdge ABL, OPL, Oz, Paradox, Parrot, Pascal, Perl, PHP, Pike, PILOT, PL/I, PL/SQL, Pliant, PostScript, POV-Ray, PowerBasic, PowerScript, PowerShell, Processing, Prolog, Puppet, Pure Data, Python, Q, R, Racket, REALBasic, REBOL, Revolution, REXX, RPG (OS/400), Ruby, Rust, S, S-PLUS, SAS, Sather, Scala, Scheme, Scilab, Scratch, sed, Seed7, Self, Shell, SIGNAL, Simula, Simulink, Slate, Smalltalk, Smarty, SPARK, SPSS, SQR, Squeak, Squirrel, Standard ML, Suneido, SuperCollider, TACL, Tcl, Tex, thinBasic, TOM, Transact-SQL, Turing, TypeScript, Vala/Genie, VBScript, Verilog, VHDL, VimL, Visual Basic .NET, WebDNA, Whitespace, X10, xBase, XBase++, Xen, XPL, XSLT, XQuery, yacc, Yorick, Z shell

So, did you find one that you liked? Or did this stir up memories from long ago, with languages you thought were dead and buried? Again, if there is a language you believe belongs in this list, please leave a comment and a Wikipedia or official site link for the language.

Related articles:
- On Programming Languages (raganwald.posterous.com)
- Polyglot Programmer (crowdint.com)
- New Programming Language Makes Social Coding Easier (technologyreview.in)

Reference: The Big List of 256 Programming Languages from our JCG partner Rob Diana at the Regular Geek blog.

Dynamic hot-swap environment inside Java with atomic updates

One could argue that the above title could be shortened to 'OSGi', and I want to discard that thought process at the very beginning. No offense to OSGi – I believe it is a great specification that got messed up at the implementation layer, or at the usability layer. You could of course do this with OSGi, though with some custom work as well. The downside of using OSGi to solve this problem is the unwanted complexity it introduces into the development process. We were inspired by JRebel, and for a moment we thought something along those lines was what we wanted – and soon realized we did not want to go into byte-code injection on a production-grade runtime. So let's analyze the problem domain.

Problem Domain

The problem we are trying to address is related to the UltraESB, specifically its live updates feature. UltraESB supports atomically updating/adding new configuration fragments (referred to as 'Deployment Units') in a running ESB, without any downtime and, most importantly, without any inconsistent states. However, one limitation of this feature was that if a configuration update of a deployment unit required a change to a Java class residing in the user class space, it required a restart of the JVM. While this is affordable in a clustered deployment (with round-robin restarts), in a single-instance deployment it introduced downtime for the whole system. We had to make sure we preserved a few guarantees in solving this:

- While a deployment unit is being updated, messages already accepted by that unit should use all resources, including the loaded classes (and any new class to be loaded), of the existing unit, while any new messages (after completing the update of the unit) have to be dispatched to the new deployment unit configuration and resource base. We call this the 'Guarantee of Consistency'. To make sure of this we need to manage 2 (or more, for that matter) versions of the same class in the same JVM, so that the respective deployment units use their classes in an atomic manner. Let's call this the 'Guarantee of Atomicity' of a deployment unit.
- A deployment unit configuration may contain Java fragments which are compiled on-the-fly at update time and which may depend on the updated classes, so the compiler must be able to locate the new version of a class during the compilation process. This is the 'Guarantee of Correctness'.
- The update process has to be transparent to users (they should not need to worry about it, neither at development time nor at deployment time) and the whole process should be simple. Let's call this the 'Guarantee of Simplicity'.

Now you will understand this problem to be something more than OSGi, as the compilation is something that OSGi cannot solve on its own (AFAIK), at least at the time I am writing this blog. Coming back to OSGi, to make it crystal clear why we didn't go down that path, let's analyze the requirement in detail. What we really want is not a completely modular JVM; rather, we want a specific space inside the JVM to be dynamic and atomically re-loadable. Mapping this to our actual use case: anything the user writes and plugs into the ESB (i.e. a deployment unit containing proxy services, sequences and mediation logic) must be dynamically and atomically re-loadable and versionable – but not the ESB core which executes the user code.
This is what users have asked from us – not the ability to add another feature to the ESB at runtime without a restart. I agree it would be cool to be able to add new features on the fly, but nobody seems to want that, or the complexity associated with it. We were ready to do any sort of complex work, but we were not ready to pass that complexity, or any variation of it, on to our users.

Proposed Solution

If you want the first 2 guarantees, Consistency and Atomicity (being able to have 2 versions of the same class loaded at runtime and using the right class for the right task), in Java, you have no other way than writing a new class loader which forces the JVM to do child-first class loading; the JVM's standard class loaders are all parent-first. The WebAppClassLoader of a typical application container is very close to what we wanted, but we needed dynamic reloading in production environments. The old class space and the new class space should be managed by 2 instances of this class loader, to safely isolate the 2 versions.

To understand the above, it is important to understand how the JVM identifies classes. Even though from the Java language perspective a class is uniquely identified by its FQN (package name + class name), from the JVM perspective the class loader which loaded the class also contributes to its identity. In OSGi-like environments this is why you see a ClassCastException even though you are casting to the 'correct' type. So the conclusion is that we need to write a class loader and keep separate instances of that class loader for different versions of different deployment units, which are re-loadable.
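To illustrate the idea, here is a minimal hedged sketch of child-first delegation – not the actual UltraESB implementation, which does considerably more (hashing, copying and cleaning up the class space):

import java.net.URL;
import java.net.URLClassLoader;

// Minimal child-first class loader: tries its own URLs before delegating to the parent.
// NB: a real loader must still delegate java.* / javax.* classes to the parent.
public class ChildFirstClassLoader extends URLClassLoader {
    public ChildFirstClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected synchronized Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        Class<?> c = findLoadedClass(name);          // already defined by this loader?
        if (c == null) {
            try {
                c = findClass(name);                 // child first: look in our own URLs
            } catch (ClassNotFoundException e) {
                c = super.loadClass(name, resolve);  // fall back to the parent-first chain
            }
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }
}

Two versions of a deployment unit would each get their own instance of such a loader over their own copies of the jars, which is what lets two versions of one class co-exist in the same JVM.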
To guarantee Correctness – making sure the on-the-fly compiler sees the correct classes when compiling the sequence fragments – there needs to be a JavaFileManager implementation that also looks at the updated class space. The Java compiler task, javac, searches for the dependencies needed to compile a class via the specified file manager, as JavaFileObject instances, and not via a class loader as Class objects; this lets the compiler resolve classes effectively, since there can be dependencies among the classes being compiled. Further, the user shouldn't be asked to place jar files in a versioned file structure (that would hurt the guarantee of Simplicity); rather, the ESB itself has to manage this jar file versioning to make sure that we do not mix class spaces of different versions. This is also important for the correct operation of the compiler task across versions, as the compiler uses memory-mapped files to read the class definitions over the input streams to the classes provided by the file manager, forcing the maintenance of a physical copy of each and every version of the jar files/classes.

Execution of the Implementation

Let me first point you to the complete changeset, which you can refer to from time to time while reading about the implementation. We identified 3 key spaces to implement, the first of which is a class loader providing classes from the user class space. We named it the HotSwapClassLoader (I am not going to show you the code snippets in this blog; please do not hesitate to browse the complete code, keeping in mind the terms of the AGPL license, as the code is AGPL). We wanted to associate this class loader with a version of a deployment unit, which is inherently supported in UltraESB as it keeps these as separate Spring sub-contexts.

So any new deployment unit configuration creation, including a new version of an existing deployment unit, instantiates a new instance of this class loader and uses it as the resource/class loader for that deployment unit configuration. At initialization the class loader calculates a normalized hash value of the user class space and checks whether there is an existing copy of the class space for the current version; depending on that assertion, it either uses the existing copy or creates a new one. This hash, plus the reuse of an existing copy of a class space, prevents keeping 2 copies of the same user class space version, as the whole process is synchronized on a static final lock. The loader then operates on that copy of the user class space. Copying is a must: it frees the user from worrying about class versioning and makes sure the correct set of classes is used in a given configuration. The class loader also takes extensive measures to clean up a class space copy at the earliest possible time; however, that only guarantees an eventual cleanup.

The next main item of the implementation is the InMemoryFileManager, an existing class that was modified to expose the user class space, in addition to the in-memory compiled source fragments, via the list method as an Iterable of SwappableJavaFileObject instances. The file manager first queries the HotSwapClassLoader for the SwappableJavaFileObject instances corresponding to the user class space, then the system class space, and returns a WrappedIterator which makes sure the user-space classes get precedence.
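For a flavour of the general technique (a hedged sketch only – the real InMemoryFileManager and SwappableJavaFileObject code differs, and a real implementation would also filter by package name and override inferBinaryName() for its own file objects), one can extend ForwardingJavaFileManager and override list():

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import javax.tools.ForwardingJavaFileManager;
import javax.tools.JavaFileManager.Location;
import javax.tools.JavaFileObject;
import javax.tools.StandardJavaFileManager;
import javax.tools.StandardLocation;

// Sketch: a file manager that serves classes from a custom class space first,
// falling back to the standard file manager for everything else.
public class ClassSpaceFileManager extends ForwardingJavaFileManager<StandardJavaFileManager> {

    private final List<JavaFileObject> customClassSpace; // e.g. the updated deployment unit's classes

    public ClassSpaceFileManager(StandardJavaFileManager delegate, List<JavaFileObject> customClassSpace) {
        super(delegate);
        this.customClassSpace = customClassSpace;
    }

    @Override
    public Iterable<JavaFileObject> list(Location location, String packageName,
                                         Set<JavaFileObject.Kind> kinds, boolean recurse) throws IOException {
        List<JavaFileObject> result = new ArrayList<JavaFileObject>();
        // User class space first, so its classes take precedence over older copies.
        if (location == StandardLocation.CLASS_PATH && kinds.contains(JavaFileObject.Kind.CLASS)) {
            result.addAll(customClassSpace);
        }
        for (JavaFileObject f : fileManager.list(location, packageName, kinds, recurse)) {
            result.add(f);
        }
        return result;
    }
}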
In the final step of the implementation, after this adjustment/customization of core JVM facilities, it was just a matter of using this custom class loader to load the classes for sequences and proxy services, and providing the custom file manager to the fragment compilation task of a deployment unit, to complete the solution. We also wanted a switch to disable this behaviour, while having it enabled by default and recommended for production deployments. To facilitate that, and a few other customizations of the runtime environment, a concept of Environments has been introduced to UltraESB, borrowed from the Grails environments feature. It concluded with a successful implementation of a dynamic runtime which is Consistent, Atomic, Correct and, most importantly, Simple for the users.

Operational Behavior

Now that we have the solution implemented, let's look at a few UltraESB internals showing how this operates in a production deployment. Any deployment unit configuration in a production environment is updated by issuing the configuration add or update administration command. This command can be issued via raw JMX, via any administration tool implemented on top of the JMX operations, such as UTerm, or via UConsole. This implementation doesn't change the way you do updates; it enhances them with the ability to add/replace jar files in the lib/custom user class space of the UltraESB, making sure the updated jar files/classes are picked up by the new configuration once the aforesaid administrative command is issued after the update. You may try this on the nightly builds of UltraESB, or even wait for the 2.0.0 release, which is scheduled to be out with a lot more new cool yet usable features in mid January 2013.

Reference: Dynamic hot-swap environment inside Java with atomic updates from our JCG partner Ruwan Linton at the Blind Vision – of Software Engineering and Life blog.

Devoxx 2012: Java 8 Lambda and Parallelism, Part 1

Overview

Devoxx, the biggest vendor-independent Java conference in the world, took place in Antwerp, Belgium on 12 – 16 November. This year it was bigger still, reaching 3400 attendees from 40 different countries. As last year, a small group of colleagues from SAP and I were there and enjoyed it a lot. After the impressive dance of the Nao robots and the opening keynotes, more than 200 conference sessions explored a variety of different technology areas, ranging from Java SE to methodology and robotics. One of the most interesting topics for me was the evolution of the Java language and platform in JDK 8. My interest was driven partly by the fact that I was already starting work on Wordcounter, and finishing work on another concurrent Java library named Evictor, about which I will be blogging in a future post.

In this blog series, I would like to share somewhat more detailed summaries of the sessions on this topic which I attended. These three sessions all took place on the same day, in the same room, one after the other, and together provided three different perspectives on lambdas, parallel collections, and parallelism in general in Java 8:

- On the road to JDK 8: Lambda, parallel libraries, and more, by Joe Darcy
- Closures and Collections – the World After Eight, by Maurice Naftalin
- Fork / Join, lambda & parallel(): parallel computing made (too?) easy, by Jose Paumard

In this post, I will cover the first session, with the other two coming soon.

On the road to JDK 8: Lambda, parallel libraries, and more

In the first session, Joe Darcy, a lead engineer on several projects at Oracle, introduced the key changes to the language coming in JDK 8, such as lambda expressions and default methods, summarized the implementation approach, and examined the parallel libraries and their new programming model. The slides from this session are available here.

Evolving the Java platform

Joe started by talking a bit about the context and concerns related to evolving the language. The general evolution policy for OpenJDK is:
- Don't break binary compatibility
- Avoid introducing source incompatibilities
- Manage behavioral compatibility changes

The above list also extends to the language evolution. These rules mean that old classfiles will always be recognized, the cases where currently legal code stops compiling are limited, and changes in the generated code that introduce behavioral differences are also avoided. The goals of this policy are to keep existing binaries linking and running, and to keep existing sources compiling. This has also influenced which features were chosen to be implemented in the language, as well as how they were implemented. Such concerns were also in effect when adding closures to Java. Interfaces, for example, are a double-edged sword: with the language features we have today, they cannot evolve compatibly over time. However, in reality APIs age, as people's expectations of how to use them evolve. Adding closures to the language results in a really different programming model, which implies it would be really helpful if interfaces could be evolved compatibly. This resulted in a change affecting both the language and the VM, known as default methods.

Project Lambda

Project Lambda introduces a coordinated language, library, and VM change. In the language, there are lambda expressions and default methods. In the libraries, there are bulk operations on collections and additional support for parallelism.
In the VM, besides the default methods, there are also enhancements to the invokedynamic functionality. This is the biggest change ever made to the language, bigger than other significant changes such as generics.

What is a lambda expression?

A lambda expression is an anonymous method with an argument list, a return type, and a body, able to refer to values from the enclosing scope:

(Object o) -> o.toString()
(Person p) -> p.getName().equals(name)

Besides lambda expressions, there is also the method reference syntax:

Object::toString

The main benefit of lambdas is that they allow the programmer to treat code as data, store it in variables and pass it to methods.

Some history

When Java was first introduced in 1995, not many languages had closures, but they are present in pretty much every major language today, even C++. For Java, it has been a long and winding road to support for closures, until Project Lambda finally started in December 2009. The current status is that JSR 335 is in early draft review, there are binary builds available, and it is expected to become part of the mainline JDK 8 builds very soon.

Internal and external iteration

There are two ways to do iteration – internal and external. In external iteration you bring the data to the code, whereas in internal iteration you bring the code to the data. External iteration is what we have today, for example:

for (Shape s : shapes) {
    if (s.getColor() == RED)
        s.setColor(BLUE);
}

There are several limitations with this approach. One of them is that the above loop is inherently sequential, even though there is no fundamental reason it couldn't be executed by multiple threads. Re-written to use internal iteration with a lambda, the above code becomes:

shapes.forEach(s -> {
    if (s.getColor() == RED)
        s.setColor(BLUE);
});

This is not just a syntactic change, since now the library is in control of how the iteration happens. Written in this way, the code expresses much more of the what and less of the how, the how being left to the library. The library authors are free to use parallelism, out-of-order execution, laziness, and all kinds of other techniques. This allows the library to abstract over behavior, which is a fundamentally more powerful way of doing things.

Functional Interfaces

Project Lambda avoided adding new types, instead reusing existing coding practices. Java programmers are familiar with, and have long used, interfaces with one method, such as Runnable, Comparator, or ActionListener. Such interfaces are now called functional interfaces. There will also be new functional interfaces, such as Predicate and Block. A lambda expression evaluates to an instance of a functional interface, for example:

Predicate<String> isEmpty = s -> s.isEmpty();
Predicate<String> isEmpty = String::isEmpty; // the equivalent method reference form
Runnable r = () -> { System.out.println("Boo!"); };

So existing libraries are forward-compatible with lambdas, which results in an 'automatic upgrade', maintaining the significant investment in those libraries.
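As a hedged illustration of that 'automatic upgrade' (written against the final Java 8 API, so details differ slightly from the pre-release API shown in the talk):

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LambdaUpgrade {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Maurice", "Jose", "Joe");
        // Comparator is a plain old single-method interface from the 1990s...
        Comparator<String> byLength = (a, b) -> Integer.compare(a.length(), b.length());
        // ...yet it can be targeted by a lambda without any change to the library.
        names.sort(byLength);
        System.out.println(names); // [Joe, Jose, Maurice]
    }
}

Conveniently, List.sort itself is an instance of the next topic: it was added to an old interface as a default method.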
Default Methods

The above example used a new method on Collection, forEach. However, adding a method to an existing interface is normally a no-go in Java, as it would result in a runtime error when a client calls the new method on an old class in which it is not implemented. A default method is an interface method that has an implementation, which is woven in by the VM at link time. In a sense, this is multiple inheritance, but there's no reason to panic, since this is multiple inheritance of behavior, not state. The syntax looks like this:

interface Collection<T> {
    ...
    default void forEach(Block<T> action) {
        for (T t : this)
            action.apply(t);
    }
}

There are certain inheritance rules to resolve conflicts between multiple supertypes:
- Rule 1 – prefer superclass methods to interface methods ('Class wins')
- Rule 2 – prefer more specific interfaces to less specific ones ('Subtype wins')
- Rule 3 – otherwise, act as if the method is abstract. In the case of conflicting defaults, the concrete class must provide an implementation.

In summary, conflicts are resolved by looking for a unique, most specific default-providing interface. With these rules, 'diamonds' are not a problem. In the worst case, when there isn't a unique most specific implementation of the method, the subclass must provide one, or there will be a compiler error. If this implementation needs to call one of the inherited implementations, the new syntax for this is A.super.m().

The primary goal of default methods is API evolution, but they are useful as an inheritance mechanism in their own right as well. One other way to benefit from them is optional methods. For example, most implementations of Iterator don't provide a useful remove(), so it can be declared 'optional' as follows:

interface Iterator<T> {
    ...
    default void remove() {
        throw new UnsupportedOperationException();
    }
}
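To make the resolution rules concrete, here is a hedged sketch of a diamond that rule 3 forces the class to resolve by hand (the type names are invented for illustration):

interface A {
    default String m() { return "A"; }
}

interface B {
    default String m() { return "B"; }
}

// A and B are unrelated, so neither is 'more specific': without an
// explicit override, class C would not compile.
class C implements A, B {
    @Override
    public String m() {
        return A.super.m(); // explicitly pick A's implementation
    }
}

public class DiamondDemo {
    public static void main(String[] args) {
        System.out.println(new C().m()); // prints "A"
    }
}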
Bulk operations on collections

Bulk operations on collections also enable a map/reduce style of programming. For example, the above code could be further decomposed by getting a stream from the shapes collection, filtering the red elements, and then iterating only over the filtered elements:

shapes.stream().filter(s -> s.getColor() == RED).forEach(s -> { s.setColor(BLUE); });

The above code corresponds even more closely to the problem statement of what you actually want to get done. There are also other useful bulk operations such as map, into, or sum. The main advantages of this programming model are:
- More composability
- Clarity – each stage does one thing
- The library can use parallelism, out-of-order execution, or laziness for performance, etc.

The stream is the basic new abstraction being added to the platform. It encapsulates laziness as a better alternative to 'lazy' collections such as LazyList. It is a facility that allows getting a sequence of elements out of it, its source being a collection, an array, or a function. The basic programming model with streams is that of a pipeline, such as collection-filter-map-sum or array-map-sorted-forEach. Since streams are lazy, they only compute as elements are needed, which pays off big in cases like filter-map-findFirst. Another advantage of streams is that they allow taking advantage of fork/join parallelism, by having libraries use fork/join behind the scenes to ease programming and avoid boilerplate.

Implementation technique

In the last part of his talk, Joe described the advantages and disadvantages of the possible implementation techniques for lambda expressions. Different options such as inner classes and method handles were considered, but not accepted due to their shortcomings. The best solution would involve adding a level of indirection, by letting the compiler emit a declarative recipe, rather than imperative code, for creating a lambda, and then letting the runtime execute that recipe however it deems fit (while making sure it's fast). This sounded like a job for invokedynamic, a new invocation mode introduced with Java SE 7 for an entirely different reason – support for dynamic languages on the JVM. It turned out this feature is not just for dynamic languages any more, as it provides a suitable implementation mechanism for lambdas, and is also much better in terms of performance.

Conclusion

Project Lambda is a large, coordinated update across the Java language and platform. It enables a much more powerful programming model for collections and takes advantage of new features in the VM. You can evaluate these new features by downloading the JDK 8 build with lambda support. IDE support is also already available, in NetBeans builds with lambda support and IntelliJ IDEA 12 EAP builds with lambda support. I have already made my own experiences with lambdas in Java in Wordcounter. As I already wrote, I am convinced that this style of programming will quickly become pervasive in Java, so if you don't yet have experience with it, I do encourage you to try it out.

Reference: Devoxx 2012: Java 8 Lambda and Parallelism, Part 1 from our JCG partner Stoyan Rachev at the Stoyan Rachev's Blog blog.

Composing Java annotations

The allowed attribute types of Java annotations are deliberately very restrictive; however, some clean composite annotation types are possible with the allowed types. Consider a sample annotation from the tutorial site:

package annotation;

@interface ClassPreamble {
    String author();
    String[] reviewers();
}

Here the author and reviewers attributes are of String and String-array types, which is in keeping with the allowed types of annotation attributes. The following is a comprehensive list of the allowed types (as of Java 7):

- a primitive type
- String
- Class, or any parameterized invocation of Class
- an enum type
- an annotation type – do note that cycles are not allowed; the annotated type cannot refer to itself
- an array type whose element type is one of the preceding types

Now, to make a richer ClassPreamble, consider two more annotation types defined this way:

package annotation;

public @interface Author {
    String first() default "";
    String last() default "";
}

package annotation;

public @interface Reviewer {
    String first() default "";
    String last() default "";
}

With these, the ClassPreamble can be composed from the richer Author and Reviewer annotation types, this way:

package annotation;

@interface ClassPreamble {
    Author author();
    Reviewer[] reviewers();
}

Now an annotation applied on a class looks like this:

package annotation;

@ClassPreamble(author = @Author(first = "John", last = "Doe"),
        reviewers = {@Reviewer(first = "first1", last = "last1"), @Reviewer(last = "last2")})
public class MyClass {
    ....
}

This is a contrived example just to demonstrate composition of annotations; however, this approach is used extensively for real-world annotations, e.g. to define a many-to-many relationship between two JPA entities:

@ManyToMany
@JoinTable(name="Employee_Project",
        joinColumns=@JoinColumn(name="Employee_ID"),
        inverseJoinColumns=@JoinColumn(name="Project_ID"))
private Collection<Project> projects;
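As a hedged sketch of how such a composed annotation can be consumed via reflection (this assumes ClassPreamble, Author and Reviewer are declared with @Retention(RetentionPolicy.RUNTIME), which the listing above omits for brevity):

package annotation;

// Reads the composed annotation back at runtime; nested annotations
// come back as ordinary attribute values.
public class PreambleReader {
    public static void main(String[] args) {
        ClassPreamble preamble = MyClass.class.getAnnotation(ClassPreamble.class);
        Author author = preamble.author();
        System.out.println("Author: " + author.first() + " " + author.last());
        for (Reviewer r : preamble.reviewers()) {
            System.out.println("Reviewer: " + r.first() + " " + r.last());
        }
    }
}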
Reference: Composing Java annotations from our JCG partner Biju Kunjummen at the all and sundry blog.

Death by Redirect

It is said that the greatest harm can come from the best intentions. We recently had a case where, because of the best intentions, two @#@&*@!!^@ parties killed our servers with a single request, causing a deadlock involving all of our Tomcat instances, including all HTTP threads. Naturally, not a pleasant situation to find yourself in.

Some explanation about our setup is necessary here. We have a number of Tomcat instances that serve HTML pages for our website, located behind a stateless load balancer. The time then came when we added a second application, deployed on Jetty. Since we needed the new app to be served as part of the same website (e.g. http://www.wix.com/jetty-app), we proxied the second (Jetty) application from Tomcat (don't dig into why we proxied Jetty from Tomcat – we thought at the time we had good reasons for it).

At the Tomcat end, we were using the Apache HttpClient library to connect to the Jetty application. HttpClient is configured by default to follow redirects. Best Intentions #1: why should we require the developer to think about redirects? Let's handle them automatically for her... At the Jetty end, we had a generic error handler that, on an error, instead of showing an error page, redirected the user to the homepage of the app on Jetty. Best Intentions #2: why show the user an error page? Let's redirect him to our homepage...

But what happens when the homepage of the Jetty application generates an error? Well, apparently it returns a redirect directive to itself! Now, if a browser had gotten that redirect, it would have entered a redirect loop and broken out of it after about 20 redirects. We would have seen 20 requests all resulting in a redirect, probably seen a traffic spike, but nothing else. However, because we had redirects turned on in the HttpClient library, what happened is the following:

1. A request arrives at our Tomcat server, which resolves it to be proxied to the Jetty application.
2. Tomcat Thread #1 proxies the request to Jetty.
3. Jetty has an exception and returns a redirect to http://www.wix.com/jetty-app.
4. Tomcat Thread #1 connects to the www.wix.com host, which goes via the load balancer and ends at another Tomcat thread – Tomcat Thread #2.
5. Tomcat Thread #2 proxies a request to Jetty.
6. Jetty has an exception and returns a redirect to http://www.wix.com/jetty-app.
7. Tomcat Thread #2 connects to the www.wix.com host, which goes via the load balancer and ends at another Tomcat thread – Tomcat Thread #3.
8. And so on, until all threads on all Tomcats are stuck on the same one request.

So, what can we learn from this incident? We can learn that the defaults of Apache HttpClient are not necessarily the ones you'd expect. We can learn that if you issue a redirect, you should make sure you are not redirecting to yourself (like our Jetty application homepage). We can learn that the HTTP protocol, which is considered a commodity, can be complicated at times and hard to tune, and that not every developer knows how to perform an HTTP request correctly. We can also learn that when you take on a 3rd-party library, you should invest time in learning how to use it, to understand its failure points and how to overcome them.
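On the first lesson, making the redirect policy explicit is a one-liner. A hedged sketch with a current Apache HttpClient 4.x builder API (this fluent API post-dates the incident; the point is simply to never inherit the default silently):

import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class NoRedirectClient {
    public static void main(String[] args) throws Exception {
        // Opt out of automatic redirect following instead of inheriting the default.
        CloseableHttpClient client = HttpClients.custom()
                .disableRedirectHandling()
                .build();
        try {
            // A 3xx response is now returned to the caller instead of being chased.
            System.out.println(client.execute(new HttpGet("http://www.wix.com/jetty-app"))
                    .getStatusLine());
        } finally {
            client.close();
        }
    }
}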
However, there is a deeper message here. When we develop software, we trade development velocity against risk. The faster we want to develop software, the more we need to trust the individual developers. And the more trust we give developers, the more risk we take on through developer blind spots – things a developer fails to think about, such as handling redirects. As a software engineer, I am not sure there is a simple solution to this issue – I guess it is up to you.

Reference: Death by Redirect from our JCG partner Yoav Abrahami at the Wix IO blog.

Tomcat Clustering Series Part 4 : Session Replication using Backup Manager

Hi, this is the fourth part of my Tomcat Clustering Series. In this post we are going to discuss how to set up session replication using the Backup Manager in a Tomcat clustering environment. Session replication brings high availability and full fail-over capability to our clustering environment. [Check the video below for a better understanding.] This post is a continuation of my last post (session replication using the Delta Manager).

With the Delta Manager, each Tomcat instance needs to replicate its session information to all other Tomcat instances. That takes more time and replication effort as the cluster size increases, so there is an alternative manager: the Backup Manager.

The Backup Manager replicates a copy of the session data to exactly one other Tomcat instance in the cluster. This is the main difference between the Delta and Backup managers. Here one Tomcat instance maintains the primary copy of the session, whereas another Tomcat instance holds the replicated session data, acting as the backup. If either Tomcat instance fails, the other one serves the session; that way fail-over capability is achieved.

The setup process for the Backup Manager is the same as for the Delta Manager, except that we need to declare the manager as BackupManager (org.apache.catalina.ha.session.BackupManager) inside the <Cluster> element; a sketch of the relevant configuration follows this explanation.

Suppose we have 3 Tomcat instances, as in the previous post, configured with the Backup Manager, and a user tries to access a page. The request comes to the load balancer, which redirects it to, say, tomcat1. tomcat1 creates the session and is now responsible for replicating exactly one copy to another instance, so it picks any Tomcat which is part of the cluster (via multicast) – say tomcat3. So tomcat3 holds the backup copy of the session. We are running the load balancer in sticky session mode, so all further requests from that particular user are redirected to tomcat1 only, and every modification on tomcat1 is replicated to tomcat3. Now suppose tomcat1 crashes or is shut down for some reason.

The same user now tries to access the page. This time the load balancer cannot reach tomcat1, so it picks one Tomcat from the remaining instances. Interestingly, there are 2 cases:

Case 1: Suppose the load balancer picks tomcat3. tomcat3 receives the request and itself holds the backup copy of the session, so it promotes that session to a primary copy and picks another Tomcat as the new backup. Only one other Tomcat remains, so tomcat3 replicates the session to tomcat2. Now tomcat3 holds the primary copy and tomcat2 holds the backup copy. tomcat3 responds to the user, and all further requests are handled by tomcat3 (sticky sessions).

Case 2: Suppose the load balancer picks tomcat2. tomcat2 receives the request but doesn't have the session, so tomcat2's session manager (the Backup Manager) asks all the other Tomcat managers: 'does anybody hold the session for this user?' (based on the session id in the cookie). tomcat3 has the backup session, so it informs tomcat2 and replicates the session to it. Now tomcat2 makes that session the primary copy, and tomcat3, which already had a copy, remains the backup; so tomcat2 holds the primary copy and tomcat3 holds the backup copy. tomcat2 responds to the user, and all further requests are handled by tomcat2 (sticky sessions).

So in either case our session is replicated and maintained by the Backup Manager. This is good for large clusters.
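As a hedged sketch of the relevant server.xml fragment (the surrounding channel, membership, sender and receiver elements are unchanged from the Delta Manager setup in the previous post; the attribute value shown is illustrative – check the clustering docs for your Tomcat version):

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
    <!-- BackupManager: each session has one primary and exactly one backup copy -->
    <Manager className="org.apache.catalina.ha.session.BackupManager"
             notifyListenersOnReplication="true"/>
    <!-- channel/membership (multicast), sender, receiver and interceptors go here as usual -->
</Cluster>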
Check the video below. You can also check my configuration in my GitHub repo, or get it as a ZIP file.

Screencast:

Reference: Tomcat Clustering Series Part 4 : Session Replication using Backup Manager from our JCG partner Rama Krishnan at the Ramki Java Blog blog.

Rule of 30 – When is a method, class or subsystem too big?

A question that constantly comes up from people who care about writing good code is: what's the right size for a method or function, or a class, or a package, or any other chunk of code? At some point any piece of code can be too big to understand properly – but how big is too big? It starts at the method or function level.

In Code Complete, Steve McConnell says that the theoretical best maximum limit for a method or function is the number of lines that can fit on one screen (i.e., what a developer can see at one time). He then goes on to reference studies from the 1980s and 1990s which found that the sweet spot for functions is somewhere between 65 and 200 lines: routines of this size are cheaper to develop and have fewer errors per line of code. However, at some point beyond 200 lines you cross into a danger zone where code quality and understandability fall apart: code that can't be tested and can't be changed safely. Eventually you end up with what Michael Feathers calls 'runaway methods': routines that are several hundreds or thousands of lines long, that are constantly being changed, and that continuously get bigger and scarier. Patrick Duboy looks deeper into this analysis on method length, and points to a more modern study from 2002 which shows that code with shorter routines has fewer defects overall – which matches most people's intuition and experience.

Smaller must be better

Bob Martin takes the idea that 'if small is good, then smaller must be better' to an extreme in Clean Code: 'The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that. Functions should not be 100 lines long. Functions should hardly ever be 20 lines long.' Martin admits that 'This is not an assertion that I can justify. I can't produce any references to research that shows that very small functions are better.' So, like many other rules or best practices in the software development community, this is a qualitative judgement made by someone based on their personal experience writing code – more of an aesthetic argument, or even an ethical one, than an empirical one. Style over substance.

The same 'small is better' guidance applies to classes, packages and subsystems – all of the building blocks of a system. In Code Complete, a study from 1996 found that classes with more routines had more defects. Like functions, according to Clean Code, classes should also be 'smaller than small'. Some people recommend that 200 lines is a good limit for a class – not a method – or as few as 50-60 lines (in Ben Nadel's Object Calisthenics exercise), and that a class should consist of 'less than 10' or 'not more than 20' methods. The famous C3 project – where Extreme Programming was born – had 12 methods per class on average. And there should be no more than 10 classes per package.

PMD, a static analysis tool that helps to highlight problems in code structure and style, defines some default values for code size limits: 100 lines per method, 1000 lines per class, and 10 methods in a class. Checkstyle, a similar tool, suggests different limits: 50 lines in a method, 1500 lines in a class.
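As a hedged sketch of how such limits are wired in (the module and property names below are Checkstyle's; the values are the limits quoted above, and FileLength measures the whole file rather than a single class):

<!-- checkstyle.xml: flag methods over 50 lines and files over 1500 lines -->
<module name="Checker">
    <module name="FileLength">
        <property name="max" value="1500"/>
    </module>
    <module name="TreeWalker">
        <module name="MethodLength">
            <property name="max" value="50"/>
        </module>
    </module>
</module>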
Rule of 30

Looking for guidelines like this led me to the 'Rule of 30' in Refactoring in Large Software Projects by Martin Lippert and Stephen Roock: 'If an element consists of more than 30 subelements, it is highly probable that there is a serious problem':

- Methods should not have more than an average of 30 code lines (not counting line spaces and comments).
- A class should contain an average of less than 30 methods, resulting in up to 900 lines of code.
- A package shouldn't contain more than 30 classes, thus comprising up to 27,000 code lines.
- Subsystems with more than 30 packages should be avoided. Such a subsystem would count up to 900 classes with up to 810,000 lines of code.
- A system with 30 subsystems would thus possess 27,000 classes and 24.3 million code lines.

What does this look like? Take a biggish system of 1 million NCLOC. This should break down into:
- 30,000+ methods
- 1,000+ classes
- 30+ packages
- hopefully more than 1 subsystem

How many systems in the real world look like this, or close to this – especially big systems that have been around for a few years?

Are these rules useful? How should you use them?

Using code size as the basis for rules like this is simple: easy to see and understand. Too simple, many people would argue: a better indicator of when code is too big is cyclomatic complexity or some other measure of code quality. But some recent studies show that code size actually is a strong predictor of complexity and quality – that 'complexity metrics are highly correlated with lines of code, and therefore the more complex metrics provide no further information that could not be measured simply with lines of code'. In 'Beyond Lines of Code: Do we Need more Complexity Metrics?' in Making Software, the authors go so far as to say that lines of code should always be considered the 'first and only metric' for defect prediction, development and maintenance models.

Recognizing that simple sizing rules are arbitrary, should you use them, and if so, how? I like the idea of rough and easy-to-understand rules of thumb that you can keep in the back of your mind when writing code, or when looking at code and deciding whether it should be refactored. The real value of a guideline like the Rule of 30 is when you're reviewing code and identifying risks and costs. But enforcing these rules in a heavy-handed way on every piece of code as it is being written is foolish. You don't want to stop when you're about to write the 31st line of a method – it would slow work down to a crawl. And forcing everyone to break code up to fit arbitrary size limits will make the code worse, not better – the structure will be dominated by short-term decisions. As Jeff Langer points out in his chapter discussing Kent Beck's four rules of Simple Design in Clean Code:

'Our goal is to keep our overall system small while we are also keeping our functions and classes small. Remember however that this rule is the lowest priority of the four rules of Simple Design. So, although it's important to keep class and function count low, it's more important to have tests, eliminate duplication, and express yourself.'

Sometimes it will take more than 30 lines (or 20, or 5, or whatever the cut-off is) to get a coherent piece of work done. It's more important to be careful in coming up with the right abstractions and algorithms, and to write clean, clear code – if a cut-off guideline on size helps you do that, use it. If it doesn't, then don't bother.

Reference: Rule of 30 – When is a method, class or subsystem too big?
from our JCG partner Jim Bird at the Building Real Software blog.

Securing your Tomcat app with SSL and Spring Security

If you've seen my last blog, you'll know that I listed ten things that you can do with Spring Security. However, before you start using Spring Security in earnest, one of the first things you really must do is ensure that your web app uses the right transport protocol, which in this case is HTTPS – after all, there's no point in having a secure web site if you're going to broadcast your users' passwords all over the internet in plain text. To set up SSL there are three basic steps...

Creating a Key Store

The first thing you need is a private keystore containing a valid certificate, and the simplest way to generate one of these is to use Java's keytool utility, located in the $JAVA_HOME/bin directory:

keytool -genkey -alias MyKeyAlias -keyalg RSA -keystore /Users/Roger/tmp/roger.keystore

In the above example:
- -alias is the unique identifier for your key.
- -keyalg is the algorithm used to generate the key. Most examples you find on the web usually cite 'RSA', but you could also use 'DSA' or 'DES'.
- -keystore is an optional argument specifying the location of your keystore file. If this argument is missing, the default location is your $HOME directory.

RSA stands for Ron Rivest (also the creator of the RC4 algorithm), Adi Shamir and Leonard Adleman. DSA stands for Digital Signature Algorithm. DES stands for Data Encryption Standard. For more information on keytool and its arguments take a look at this Informit article by Jon Svede.

When you run this program you'll be asked a few questions:

Roger$ keytool -genkey -alias MyKeyAlias -keyalg RSA -keystore /Users/Roger/tmp/roger.keystore
Enter keystore password:
Re-enter new password:
What is your first and last name?
  [Unknown]:  localhost
What is the name of your organizational unit?
  [Unknown]:  MyDepartmentName
What is the name of your organization?
  [Unknown]:  MyCompanyName
What is the name of your City or Locality?
  [Unknown]:  Stafford
What is the name of your State or Province?
  [Unknown]:  NA
What is the two-letter country code for this unit?
  [Unknown]:  UK
Is CN=localhost, OU=MyDepartmentName, O=MyCompanyName, L=Stafford, ST=UK, C=UK correct?
  [no]:  Y
Enter key password for <MyKeyAlias>
        (RETURN if same as keystore password):

Most of the fields are self-explanatory; however, for the first and last name value I generally use the machine name – in this case localhost.

Updating the Tomcat Configuration

The second step in securing your app is to ensure that your Tomcat has an SSL connector. To do this you need to find Tomcat's server.xml configuration file, which is usually located in the 'conf' directory. Once you've got hold of this, and if you're using Tomcat, it's a matter of uncommenting:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS" />

...and making it look something like this:

<Connector SSLEnabled="true"
           keystoreFile="/Users/Roger/tmp/roger.keystore"
           keystorePass="password"
           port="8443"
           scheme="https"
           secure="true"
           sslProtocol="TLS"/>

Note that the password 'password' is in plain text, which isn't very secure. There are ways around this, but that's beyond the scope of this blog.
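Before wiring the keystore into Tomcat, it is worth sanity-checking what you created; a quick check, using the path from the example above, would be:

keytool -list -v -keystore /Users/Roger/tmp/roger.keystore

This prompts for the keystore password and should print the MyKeyAlias entry together with its creation date, certificate details and fingerprints.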
If you're using Spring's tcServer, then you'll find that it already has an SSL connector configured something like this:

<Connector SSLEnabled="true"
           acceptCount="100"
           connectionTimeout="20000"
           executor="tomcatThreadPool"
           keyAlias="tcserver"
           keystoreFile="${catalina.base}/conf/tcserver.keystore"
           keystorePass="changeme"
           maxKeepAliveRequests="15"
           port="${bio-ssl.https.port}"
           protocol="org.apache.coyote.http11.Http11Protocol"
           redirectPort="${bio-ssl.https.port}"
           scheme="https"
           secure="true"/>

...in which case it's just a matter of editing the various fields, including keyAlias, keystoreFile and keystorePass.

Configuring your App

If you now start Tomcat and run your web application, you'll find that it's accessible using HTTPS. For example, typing https://localhost:8443/my-app will work, but so will http://localhost:8080/my-app. This means that you also need to do some jiggery-pokery on your app to ensure that it only responds to HTTPS, and there are two approaches you can take.

If you're not using Spring Security, then you can simply add the following to your web.xml before the last web-app tag:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>my-secure-app</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

If you are using Spring Security, then there are a few more steps to getting things going. Part of the general Spring Security setup is to add the following to your web.xml file. Firstly, you need to add a Spring Security application context file to the contextConfigLocation context-param:

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/spring/root-context.xml
        /WEB-INF/spring/appServlet/application-security.xml
    </param-value>
</context-param>

Secondly, you need to add the Spring Security filter and filter-mapping:

<filter>
    <filter-name>springSecurityFilterChain</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

Lastly, you need to create, or edit, your application-security.xml as shown in the very minimalistic example below:

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/security"
    xmlns:beans="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/security
        http://www.springframework.org/schema/security/spring-security-3.1.xsd">

    <http auto-config="true">
        <intercept-url pattern="/**" requires-channel="https" />
    </http>

    <authentication-manager>
    </authentication-manager>

</beans:beans>

In the example above, the intercept-url element has been set up to intercept all URLs and force them to use the https channel. The configuration details above may give the impression that it's quicker to use the simple web.xml config change, but if you're already using Spring Security, then it's only a matter of adding a requires-channel attribute to your existing configuration.
A sample app called tomcat-ssl demonstrating the above is available on GitHub at: https://github.com/roghughe/captaindebug

Reference: Securing your Tomcat app with SSL and Spring Security from our JCG partner Roger Hughes at the Captain Debug's Blog blog.