


The Wrong Notion of Time

No one wakes up in the morning and says “Today I’m gonna screw up. Today I’m gonna piss my boss and all my teammates off by writing the worst code I could possibly write”. Well, there are always exceptions, but normally no one does that. If that is the case, how come Agile projects are now failing? How come we still have the same old problems?

A Technical Debt Story

Some time ago I was on a project and one of the developers chose to work on a brand new feature. For the implementation of this new feature, we did not need to touch any of our existing code, besides very few things just to wire the new feature into the application. After a day or so, I offered to pair with him. Naturally, since I had just joined him, I asked him to give me an overview of what the feature was about. He promptly explained it to me and I asked him to show me where he was so we could continue. After he finished showing the code to me, I made a few observations, since it was not clear to me that his code reflected what needed to be done – according to his previous explanation. Basically, the language he used to explain the business feature to me was not in sync with the language he had used in the code, and I could also see some code that was not really necessary for the implementation of that feature. I also noticed that there were no tests. When I asked him about that, he said: “It is working now and I may need that extra code in the future. Let’s add this refactoring you are proposing and the unit test to the technical debt backlog. I need to finish this feature.”

How crazy is that? That was a brand new feature. We should be reducing technical debt as we went along, instead of adding more of it. However, this developer somehow felt that it was OK to do that. At the end of the day, we had a technical debt backlog, didn’t we? That was supposedly an Agile team with experienced developers, but somehow, in their minds, it was OK to have this behaviour.
Perhaps one day someone would look at the technical debt and do something about it. Possibly. Maybe. Quite unlikely. Nah, it’s never gonna happen.

But we all want to do the right thing

We do not do these things on purpose. However, over time, I realised that we developers have a wrong notion of time. We think we need to rush all the time to deliver the tasks we committed to. Pressure will always be part of a software developer’s life, and when there is pressure, we end up cutting corners. We do not do that because we are sloppy. We normally do that because we feel that we need to go faster. We feel that we are doing a good job, providing the business the features they want as fast as we can. The problem is that we do not always understand the implications of our own decisions.

A busy team with no spare time

I joined this team in a very large organisation. There was a lot of pressure and the developers were working really hard. First, it took me days to get my machine set up. The project was a nightmare to configure in our IDEs. We were using Java and I was trying to get my Eclipse to import the project. The project had more than 70 Maven projects and modules, with loads of circular dependencies. After a few days, I had my local environment set up. The project was using a heavyweight JEE container and loads of queues, and had to integrate with many other internal systems. When pairing with one of the guys (pairing was not common there, but I asked them if I could pair with them) I noticed that he was playing messages into a queue and looking at logs. I asked him what he was doing and he said that it was not possible to run the system locally, so he had to add loads of logs to the code, package and deploy the application in the UAT environment, play XML messages into one of the inbound queues, look at the logs in the console, and try to figure out what the application was doing.
Apparently he had made a change and the expected message was not arriving in the expected outbound queue. So, after almost twenty minutes of changing the XML message and replaying it into the inbound queue, he had an idea of what the problem could be. He went back to his local machine, changed a few lines of code, added more logs, changed a few existing ones to print out more information, and started building the application again. At this point I asked if he would not write tests for the change, and if he had tests for the rest of the application. He then told me that the task he was working on was important, so he had to finish it quickly and did not have time to write tests. Then he deployed the new version of the application in UAT again (note that no one else could use the UAT environment while he was doing his tests), played an XML message into the inbound queue and started looking at the logs again. That went on for another two days until the problem was fixed. It turned out that there were some minor logical bugs in the code – things that a unit test would have caught immediately.

We don’t have time, but apparently someone else does

Imagine the situation above. Imagine an application with a few hundred thousand lines. Now imagine a team of seven developers. Now imagine ten of those teams in five different countries working on the same application. Yes, that was the case. There were some system tests (black-box tests), but they used to take four hours to run and quite often they were broken, so no one really paid attention to them. Can you imagine the amount of time wasted per developer, per task or story? Let’s not forget the QA team, because apparently testers have all the time in the world. They had to manually test the entire system for every single change. Every new feature added to the system was, of course, making the system bigger, causing the system tests to be even slower and QA cycles even longer.
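To make the contrast concrete, here is a hypothetical sketch of the kind of routing decision that took two days of log-based debugging in UAT, together with the check that would have caught the bug in milliseconds. All names here (OrderRouter, route, the queue names) are invented for illustration; they are not from the project described above.

```java
// Hypothetical sketch: a small piece of routing logic and the fast,
// local check that replaces deploy-to-UAT-and-grep-the-logs debugging.
public class OrderRouter {

    // Decides which outbound queue a message belongs to.
    static String route(String productType) {
        if ("gadget".equals(productType)) {
            return "GadgetOrders";
        }
        return "WidgetOrders";
    }

    public static void main(String[] args) {
        // A check like this runs locally in milliseconds -- no UAT deploy,
        // no XML replay, no console logs.
        if (!route("gadget").equals("GadgetOrders")) throw new AssertionError();
        if (!route("widget").equals("WidgetOrders")) throw new AssertionError();
        System.out.println("all routing checks passed");
    }
}
```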
Debug time was also getting longer, since each developer was adding more code that all the others would need to debug to understand how things worked. Now think about all the time wasted here, every day, every week, every month. This is all because we developers do not have time. Dedicated Quality Assurance teams are an anti-pattern. Testers should find nothing, zero, nada. Every time a tester finds a bug, we developers should feel bad about it. Every bug found in production is an indication of something that we have not done. Some bugs are related to bad requirements, but even then we should have done something about that. Maybe we should have helped our BAs or product owners to clarify them. By no means am I saying that we should not have testers. They can be extremely valuable, exploring our applications in unexpected ways that only a human could. They should not waste their time executing test plans that could be automated by the development team. The business wants the features as soon as possible, and we feel that it is our obligation to satisfy them – and it is. However, business people look at the system as a whole, and so should we. They look at everything, and not just the story we are working on. It is our job to remove (automate) all the repetitive work. I still remember, back in the ’90s, when debugging skills were a big criterion in technical interviews. Those days are gone. Although it is important to have debugging skills, we should be unpleasantly surprised whenever we need to resort to them, and when that occurs, we need to address it immediately, writing tests and refactoring our code so we never need to do it again.

Using time wisely

Our clients and/or employers are interested in software that satisfies their needs, works as specified, and is easy to change whenever they change their minds. It is our job to provide that to them. The way we go about satisfying their expectations is, normally, up to us.
Although they may mention things like automated testing and Agile methodologies, what they really want is good value for their money when it comes to the investment they are making in a software project. We need to use our (their) time wisely, automating whatever we can – be it tests or deployment procedures – instead of thinking that we may not have time to do it. We can always quantify how much time we are spending on repetitive tasks, and even go to the extent of showing them how much time is being spent on those activities over a period of time. Before implementing any new feature or task, we should spend some time preparing our system to accept the changes in a nice way, so we can just “slide them in” with no friction, and making sure that whatever we write can be easily tested and deployed. When estimating our work, we should always take this into account as part of the time that it will take us to do it, instead of having the false impression that we will be going faster if we treat it as a separate task since, chances are, it may never get done and the whole team will be slowed down because of that. The less time we waste manually testing (or waiting for a long automated test suite to run), debugging, dealing with a huge amount of technical debt, trying to get our IDE to work nicely with a messed-up project structure, or fighting to deploy the application, the more time we have to look after the quality of our application and make our clients happy.

Note: The teams I mentioned above, after a lot of hard work, commitment, support from management, and a significant amount of investment (time and money), managed to turn things around and are now among the best teams in the organisation. Some of the teams managed to replace (re-write) an unreliable in-house test suite that used to take over three hours to run with a far more reliable one that takes around 20 minutes.
One of the teams is very close to achieving a “one-button” deployment and has an extensive test suite, with tests ranging from unit to system (black box), that runs in minutes and with code coverage close to 100%.

Reference: The Wrong Notion of Time from our JCG partner Sandro Mancuso at the Crafted Software blog.

Do you get Just-in-time compilation?

Remember the last time you were laughed at by C developers? Told that Java is so slooooow that they would never even consider using a language like it? In many ways, the concept still holds. But for its typical usage – in the backbones of a large enterprise – Java performance can definitely stand against many contestants. And this is possible mostly thanks to the magical JIT. Before jumping into Just-In-Time compilation tricks, let’s dig into the background a bit.

As you might remember, Java is an interpreted language. The Java compiler known by most users, javac, does not compile Java source files directly into processor instructions the way C compilers do. Instead, it produces bytecode, a machine-independent binary format governed by a specification. This bytecode is interpreted at runtime by the JVM. This is the main reason why Java is so successful cross-platform – you can write and build the program on one platform and run it on several others. On the other hand, it does introduce some negative aspects, one of the most severe being that interpreted code is usually slower than code compiled directly into platform-specific native binaries. Sun realized the severity of this already at the end of the nineties, when it hired Dr. Cliff Click to provide a solution. Welcome – HotSpot. The name derives from the ability of the JVM to identify “hot spots” in your application – chunks of bytecode that are frequently executed. They are then targeted for extensive optimization and compilation into processor-specific instructions. The optimizations lead to high-performance execution with a minimum of overhead for less performance-critical code. In some cases, it is possible for the adaptive optimization of a JVM to exceed the performance of hand-coded C++ or C code. The component in the JVM responsible for those optimizations is called the Just-In-Time compiler (JIT). It takes advantage of an interesting program property.
Virtually all programs spend the majority of their time executing a minority of their code. Rather than compiling all of your code just in time, the Java HotSpot VM immediately runs the program using an interpreter, and analyzes the code as it runs to detect the critical hot spots in the program. Then it focuses the attention of a global native-code optimizer on the hot spots. By avoiding compilation of infrequently executed code, the Java HotSpot compiler can devote more attention to the performance-critical parts of the program, which means that your overall compilation time does not increase. This hot spot monitoring continues dynamically as the program runs, so that the VM adapts its performance on the fly according to the usage patterns of your application. The JIT achieves its performance benefits through several techniques, such as eliminating dead code, bypassing boundary-condition checks, removing redundant loads, inlining methods, etc. The following samples illustrate those techniques. The first snippet is the code as written by a developer; the second is the code executed after HotSpot has detected it to be “hot” and applied its optimization magic.

Unoptimized code:

class Calculator {
    Wrapper wrapper;
    public void calculate() {
        y = wrapper.get();
        z = wrapper.get();
        sum = y + z;
    }
}

class Wrapper {
    final int value;
    final int get() {
        return value;
    }
}

Optimized code:

class Calculator {
    Wrapper wrapper;
    public void calculate() {
        y = wrapper.value;
        sum = y + y;
    }
}

class Wrapper {
    final int value;
    final int get() {
        return value;
    }
}

The first Calculator above is what the developer has written; the second is what effectively runs after the JIT has finished its work. The sample combines several optimization techniques. Let’s look at how the final result is achieved, starting from the unoptimized code.
This is the code being run before it is detected as a hot spot:

public void calculate() {
    y = wrapper.get();
    z = wrapper.get();
    sum = y + z;
}

Inlining a method. wrapper.get() has been replaced by wrapper.value, as latency is reduced by accessing wrapper.value directly instead of going through a function call:

public void calculate() {
    y = wrapper.value;
    z = wrapper.value;
    sum = y + z;
}

Removing redundant loads. z = wrapper.value has been replaced with z = y, so that latency is reduced by accessing the local value instead of wrapper.value:

public void calculate() {
    y = wrapper.value;
    z = y;
    sum = y + z;
}

Copy propagation. Since z and y hold the same value, uses of z are replaced by y, leaving no use for the extra variable:

public void calculate() {
    y = wrapper.value;
    y = y;
    sum = y + y;
}

Eliminating dead code. y = y is unnecessary and can be eliminated:

public void calculate() {
    y = wrapper.value;
    sum = y + y;
}

This small sample shows several powerful techniques used by the JIT to increase the performance of the code. Hopefully it proved beneficial in understanding this powerful concept. The following related links were used for this article (besides two angry C developers):

http://www.oracle.com/technetwork/java/whitepaper-135217.html
http://www.oracle.com/technetwork/java/javase/tech/index-jsp-136373.html
http://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/geninfo/diagnos/underst_jit.html

Reference: Do you get Just-in-time compilation? from our JCG partner Nikita Salnikov-Tarnovski at the Plumbr blog.
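You can watch this happen on your own machine. The sketch below (class and method names are invented for this illustration) repeatedly times a hot method; on a typical HotSpot JVM the later rounds run noticeably faster than the first, once the interpreter hands the method over to the JIT. Exact timings are machine- and JVM-dependent, so treat the printed numbers as illustrative only; running with -XX:+PrintCompilation shows the compilation events themselves.

```java
// A minimal warm-up demonstration. Run with:
//   java -XX:+PrintCompilation JitWarmUp
// to see HotSpot log the compilation of warmUp() as it becomes hot.
public class JitWarmUp {

    // A deliberately simple hot method: sums 0..n-1.
    static long warmUp(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    // Times a single invocation in nanoseconds.
    static long timeOnce() {
        long start = System.nanoTime();
        warmUp(1_000_000);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long first = timeOnce();
        long last = first;
        // Repeated calls make the method "hot" and trigger JIT compilation.
        for (int round = 0; round < 50; round++) {
            last = timeOnce();
        }
        System.out.println("first call:      " + first + " ns");
        System.out.println("after warm-up:   " + last + " ns");
    }
}
```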

Getting started with Apache Camel

In a previous blog post we got to know enterprise integration patterns (EIPs). Now, in this post, we will look into the Apache Camel framework, which realizes those patterns.

About Camel: Apache Camel is an open source project which is almost 5 years old and has a large community of users. At the heart of the framework is an engine which does the job of mediation and routes messages from one system to another. At the periphery it has a plethora of components that allow interfacing with systems using various protocols (e.g. FTP(S), RPC, web services, HTTP, JMS, REST, etc.). It also provides an easy-to-understand domain-specific language in Java, Spring and Scala. Now let’s get started with Apache Camel. We will set up a project using Maven, add dependencies for the required Camel libraries, and write our example using both the Java and Spring DSLs. Consider a system that accepts two types of orders: widgets and gadgets. The orders arrive on a JMS queue and are specified in XML format. The gadget inventory polls a file directory for incoming orders, while the widget inventory listens on a queue. We run XPath on all arriving orders and figure out whether they belong to the widget or the gadget inventory. Our use case is depicted by the following diagram.

To get started, just open a command line window in a directory and type:

c:\myprojects>mvn archetype:generate

assuming we have Maven 2+ and JDK 1.6 on our path. To run this example we also need an ActiveMQ broker. We will add the following dependencies in the pom:

org.apache.camel : camel-core : 2.10.1 - lib containing the Camel engine
org.apache.camel : camel-ftp : 2.10.1 - Camel's FTP component
org.apache.activemq : activemq-camel : 5.6.0
org.apache.activemq : activemq-pool : 5.6.0 - libs required to integrate Camel with ActiveMQ
log4j : log4j : 1.2.16
org.slf4j : slf4j-log4j12 : 1.6.4 - libs for logging

The complete pom.xml is pasted in this gist entry.
Now let’s code our Camel route, which shall poll a JMS queue, apply XPath to figure out whether the order is for the gadget inventory or the widget inventory, and subsequently route it to an FTP directory or a JMS queue. Orders arriving in our system have the structure below:

<?xml version="1.0" encoding="UTF-8"?>
<order>
    <product>gadget</product>
    <lineitems>
        <item>cdplayer</item>
        <qty>2</qty>
    </lineitems>
    <lineitems>
        <item>ipod-nano</item>
        <qty>1</qty>
    </lineitems>
</order>

The value of the product element specifies whether it is a gadget or a widget order. So applying the XPath below to the orders lets us decide where to route each message: if /order/product = 'gadget', forward to an FTP directory, else forward to a queue. Now let’s code the route. In order to do so, one needs to extend the RouteBuilder (org.apache.camel.builder.RouteBuilder) class and override its configure method. We will name our class JavaDSLMain and put the following code in its configure method:

from("activemq:queue:NewOrders?brokerURL=tcp://")
    .choice().when(xpath("/order/product = 'gadget'"))
        .to("activemq:queue:GadgetOrders?brokerURL=tcp://")
    .otherwise()
        .to("");

Having done that, let’s analyze the route. The keywords above form the Camel EIP DSL; the intent of this route is summarized as follows: from: consume messages from an endpoint – in our case a queue. choice: a predicate; here we apply a simple rule. xpath: apply an XPath expression to the current message; the outcome is a boolean. to: tell Camel to place the message at an endpoint, i.e. produce. Each of these keywords may take parameters. For example, from takes the endpoint from which to consume messages – in our case a queue on a JMS (ActiveMQ) broker. Notice that Camel automatically does type conversion for you; in the above route, the message object is converted to a DOM before the XPath is applied.
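The routing expression itself can be verified outside of Camel, using only the JDK’s built-in javax.xml.xpath API. The sketch below (class and method names are invented for this illustration) evaluates the same /order/product = 'gadget' predicate against a sample order document, which is a handy sanity check before wiring the expression into a route.

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

// Verifies the route's XPath predicate against an order document
// using only the JDK, no Camel required.
public class OrderXPathCheck {

    // Returns true when the order document is a gadget order.
    static boolean isGadgetOrder(String orderXml) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        return (Boolean) xpath.evaluate(
                "/order/product = 'gadget'",
                new InputSource(new StringReader(orderXml)),
                XPathConstants.BOOLEAN);
    }

    public static void main(String[] args) throws Exception {
        String gadgetOrder =
                "<order><product>gadget</product>"
                + "<lineitems><item>ipod-nano</item><qty>1</qty></lineitems>"
                + "</order>";
        System.out.println(isGadgetOrder(gadgetOrder)); // prints: true
    }
}
```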
We will also put the main method in this class itself, to quickly test things. Inside the main method we need to instantiate a Camel context which shall host this route; on starting the context, Camel sets up the route and starts listening on the NewOrders queue. The code that goes in the main method is as follows:

CamelContext camelContext = new DefaultCamelContext();
camelContext.addRoutes(new JavaDSLMain());
camelContext.start();

/* wait indefinitely */
Object obj = new Object();
synchronized (obj) {
    obj.wait();
}

View this gist entry for the complete code listing. Another way to use Camel is with Spring: Camel routes go into the Spring application context file. Instead of writing Java code, all we use is XML to quickly define the routes. For doing this we need to import the Camel namespace in the Spring context file, and using an IDE such as Spring Tool Suite one can quickly build and write integration applications. Check this gist entry for the Spring application context demonstrating a Camel route. Place this context file inside the META-INF/spring folder, i.e. in our Maven project it goes under /src/main/resources/META-INF/spring. At the top we have imported Camel’s Spring namespace, which allows defining a Camel route inside Spring’s application context. Additionally, in our pom file we need to add a dependency to include the Spring beans that take care of recognizing and instantiating the Camel engine inside Spring. Add the below to include Camel support for Spring:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-spring</artifactId>
    <version>2.10.1</version>
</dependency>

Camel provides a helper class (org.apache.camel.spring.Main) that scans for all Spring application context files kept under a META-INF/spring folder on the classpath. Check this gist entry showing the required code. With this example we have realized the Content-Based Router pattern, which inspects the content of a message to make routing decisions.
Reference: Getting started with Apache Camel from our JCG partner Abhishek Jain at the NS.Infra blog.

Decorate with decorator design pattern

The decorator pattern is one of the widely used structural patterns. It dynamically changes the functionality of an object at runtime without impacting the existing functionality of the object. In short, this pattern adds additional functionality to an object by wrapping it.

Problem statement: Imagine a scenario where we have a pizza which is already baked with tomato and cheese. After that, you recall that you need to put some additional topping on, at the customer’s choice. So you will need to add toppings like chicken and pepper on the go.

Intent: Add or remove additional functionality or responsibilities from the object dynamically, without impacting the original object. At times this is required when adding functionality by subclassing is not practical, as it might create loads of subclasses.

Solution: In this case we are not using inheritance to add additional functionality to the object (i.e. the pizza); instead, we are using composition. This pattern is useful when we don’t want to use inheritance and would rather use composition.

Structure

Following are the participants of the decorator design pattern:

Component – the interface which can have additional responsibilities associated with it at runtime.
Concrete component – the original object to which the additional functionality is added.
Decorator – an abstract class which contains a reference to the component object and also implements the component interface.
Concrete decorator – extends the decorator and builds additional functionality on top of the Component class.

Example: In the above example, the Pizza class acts as the Component and BasicPizza is the concrete component which needs to be decorated. The PizzaDecorator acts as the Decorator abstract class, which contains a reference to the Pizza class. ChickenTikkaPizza is the ConcreteDecorator, which builds additional functionality onto the Pizza class.
Let’s summarize the steps to implement the decorator design pattern:

Create an interface for BasicPizza (Concrete Component), which we want to decorate.
Create an abstract class PizzaDecorator that contains a reference field of the Pizza (decorated) interface. Note: the decorator (PizzaDecorator) must implement the same decorated (Pizza) interface.
Pass the Pizza object that you want to decorate into the constructor of the decorator.
Create a Concrete Decorator (ChickenTikkaPizza) which provides the additional functionality of the extra topping. The Concrete Decorator (ChickenTikkaPizza) should extend the PizzaDecorator abstract class.
Redirect methods of the decorator (bakePizza()) to the decorated class’s core implementation.
Override methods (bakePizza()) where you need to change behaviour, e.g. the addition of the Chicken Tikka topping.
Let the client class create the Component type (Pizza) object by creating a Concrete Decorator (ChickenTikkaPizza) with help from a Concrete Component (BasicPizza). To remember in short: New Component = Concrete Component + Concrete Decorator

Pizza pizza = new ChickenTikkaPizza(new BasicPizza());

Code example:

Pizza.java

public interface Pizza {
    public String bakePizza();
}

BasicPizza.java

public class BasicPizza implements Pizza {
    public String bakePizza() {
        return "Basic Pizza";
    }
}

PizzaDecorator.java

public abstract class PizzaDecorator implements Pizza {
    Pizza pizza;

    public PizzaDecorator(Pizza newPizza) {
        this.pizza = newPizza;
    }

    @Override
    public String bakePizza() {
        return pizza.bakePizza();
    }
}

ChickenTikkaPizza.java

public class ChickenTikkaPizza extends PizzaDecorator {
    public ChickenTikkaPizza(Pizza newPizza) {
        super(newPizza);
    }

    public String bakePizza() {
        return pizza.bakePizza() + " with Chicken topping added";
    }
}

Client.java

public static void main(String[] args) {
    Pizza pizza = new ChickenTikkaPizza(new BasicPizza());
    System.out.println(pizza.bakePizza());
}

Benefits: The decorator design pattern provides more flexibility than standard inheritance.
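That flexibility shows up as soon as a second topping is needed: decorators can be stacked in any order and number at runtime, while a subclass hierarchy would need a class per combination. The sketch below restates the article’s Pizza classes in one self-contained file and adds an invented second decorator, PepperPizza, following the same shape as ChickenTikkaPizza.

```java
// Self-contained restatement of the article's classes, plus a second
// (invented) decorator to show runtime stacking.
interface Pizza {
    String bakePizza();
}

class BasicPizza implements Pizza {
    public String bakePizza() { return "Basic Pizza"; }
}

abstract class PizzaDecorator implements Pizza {
    protected final Pizza pizza;
    protected PizzaDecorator(Pizza newPizza) { this.pizza = newPizza; }
    public String bakePizza() { return pizza.bakePizza(); }
}

class ChickenTikkaPizza extends PizzaDecorator {
    ChickenTikkaPizza(Pizza p) { super(p); }
    public String bakePizza() { return pizza.bakePizza() + " with Chicken topping added"; }
}

// Hypothetical second topping, not in the original article.
class PepperPizza extends PizzaDecorator {
    PepperPizza(Pizza p) { super(p); }
    public String bakePizza() { return pizza.bakePizza() + " with Pepper topping added"; }
}

public class DecoratorStackingDemo {
    public static void main(String[] args) {
        // Toppings composed at runtime; no ChickenTikkaPepperPizza class needed.
        Pizza pizza = new PepperPizza(new ChickenTikkaPizza(new BasicPizza()));
        System.out.println(pizza.bakePizza());
        // prints: Basic Pizza with Chicken topping added with Pepper topping added
    }
}
```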
Inheritance also extends the parent class’s responsibility, but in a static manner; the decorator allows doing this in a dynamic fashion.

Drawback: Code debugging might be difficult, since this pattern adds functionality at runtime.

Interesting points:

The adapter pattern plugs different interfaces together, whereas the decorator pattern enhances the functionality of the object.
Unlike the decorator pattern, the strategy pattern changes the original object without wrapping it.
While the proxy pattern controls access to the object, the decorator pattern enhances its functionality.
Both the composite and decorator patterns use the same tree structure, but there are subtle differences between them. We can use the composite pattern when we need to keep a group of objects having similar behaviour inside another object, whereas the decorator pattern is used when we need to modify the functionality of the object at runtime.

There are various live examples of the decorator pattern in the Java API:

java.io.BufferedReader;
java.io.FileReader;
java.io.Reader;

If we look at the constructor of BufferedReader, we can see that BufferedReader wraps the Reader class, adding more features, e.g. readLine(), which is not present in the Reader class. We can use the same form as in the example above of how the client uses the decorator pattern:

new BufferedReader(new FileReader(new File("File1.txt")));

Similarly, BufferedInputStream is a decorator for the decorated object FileInputStream:

BufferedInputStream bs = new BufferedInputStream(new FileInputStream(new File("File1.txt")));

Reference: Gang of Four – Decorate with decorator design pattern from our JCG partner Mainak Goswami at the Idiotechie blog.

Changes to String.substring in Java 7

It is common knowledge that Java optimizes the substring operation for the case where you generate a lot of substrings of the same source string. It does this by using the (value, offset, count) way of storing the information. See the example below: in the diagram you see the strings ‘Hello’ and ‘World!’ derived from ‘Hello World!’ and the way they are represented in the heap – there is one character array containing ‘Hello World!’ and two references into it. This method of storage is advantageous in some cases, for example for a compiler which tokenizes source files. In other instances it may lead you to an OutOfMemoryError (if you are routinely reading long strings and only keeping small parts of them – the above mechanism prevents the GC from collecting the original String buffer). Some even call it a bug. I wouldn’t go so far, but it’s certainly a leaky abstraction, because you were forced to do the following to ensure that a copy was made: new String(str.substring(5, 6)).

This all changed in May of 2012, with Java 7u6. The pendulum has swung back and now full copies are made by default. What does this mean for you?

For most, it is probably just a nice piece of Java trivia.
If you are writing parsers and such, you can no longer rely on the implicit caching provided by String. You will need to implement a similar mechanism based on buffering and a custom implementation of CharSequence.
If you were doing new String(str.substring) to force a copy of the character buffer, you can stop as soon as you update to the latest Java 7 (and you should do that quite soon, since Java 6 is being EOLed as we speak).

Thankfully the development of Java is an open process and such information is at the fingertips of everyone!
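Here is a small sketch of the pre-7u6 defensive idiom (method and class names are invented for illustration). The visible behaviour is identical on every Java version; what changed in 7u6 is only whether the result shares the original backing char[], so on modern JVMs the explicit new String(...) is redundant, though harmless.

```java
// Demonstrates the defensive-copy idiom that was needed before Java 7u6
// to keep a small substring without pinning the large source string.
public class SubstringCopy {

    static String keepSmallPart(String longString) {
        // Before 7u6 the explicit copy let the large backing array be GC'd;
        // from 7u6 onwards, longString.substring(0, 5) alone is equivalent.
        return new String(longString.substring(0, 5));
    }

    public static void main(String[] args) {
        String large = "Hello World!";
        System.out.println(keepSmallPart(large)); // prints: Hello
    }
}
```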
A couple more references (since we don’t say pointers in Java) related to Strings:

If you are storing the same string over and over again (maybe you’re parsing messages from a socket, for example), you should read up on alternatives to String.intern() (and also consider reading Item 50 from the second edition of Effective Java: Avoid strings where other types are more appropriate).
Look into (and do benchmarks before using them!) options like UseCompressedStrings (which seems to have been removed), UseStringCache and StringCache.

Hope I didn’t string you along too much and you found this useful! Until next time – Attila Balazs

Meta: this post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on! Want to write for the blog? We are looking for contributors to fill all 24 slots and would love to have your contribution! Contact Attila Balazs to contribute!

Reference: Changes to String.substring in Java 7 from our JCG partner Attila-Mihaly Balazs at the Java Advent Calendar blog.

The Big List of 256 Programming Languages

The holiday season typically brings lots of vacation time for people. Instead of sitting around and being lazy, why not take the time to learn a new programming language? I am not recommending a specific language over others at this time, but providing a long list of languages based on GitHub and TIOBE. I have not tried to categorize or validate this list of languages in any way, so please do not complain about some ancient or useless technology being listed. If you think there is a language that should be added, please leave it in a comment along with a link with information about the language, preferably on Wikipedia or the actual language site.         I give no guarantees that the links for these languages are what was meant by GitHub or TIOBE, but they do not link to an official site for the languages so I did my best in finding something.4th Dimension/4D ABAP ABC ActionScript Ada Agilent VEE Algol Alice Angelscript Apex APL AppleScript Arc Arduino ASP AspectJ Assembly ATLAS Augeas AutoHotkey AutoIt AutoLISP Automator Avenue Awk Bash (Visual) Basic bc BCPL BETA BlitzMax Boo Bourne Shell Bro C C Shell C# C++ C++/CLI C-Omega Caml Ceylon CFML cg Ch CHILL CIL CL (OS/400) Clarion Clean Clipper Clojure CLU COBOL Cobra CoffeeScript ColdFusion COMAL Common Lisp Coq cT Curl D Dart DCL DCPU-16 ASM Delphi/Object Pascal DiBOL Dylan E eC Ecl ECMAScript EGL Eiffel Elixir Emacs Lisp Erlang Etoys Euphoria EXEC F# Factor Falcon Fancy Fantom Felix Forth Fortran Fortress (Visual) FoxPro Gambas GNU Octave Go Google AppsScript Gosu Groovy Haskell haXe Heron HPL HyperTalk Icon IDL Inform Informix-4GL INTERCAL Io Ioke J J# JADE Java Java FX Script JavaScript JScript JScript.NET Julia Korn Shell Kotlin LabVIEW Ladder Logic Lasso Limbo Lingo Lisp Logo Logtalk LotusScript LPC Lua Lustre M4 MAD Magic Magik Malbolge MANTIS Maple Mathematica MATLAB Max/MSP MAXScript MEL Mercury Mirah Miva ML Monkey Modula-2 Modula-3 MOO Moto MS-DOS Batch MUMPS NATURAL Nemerle Nimrod NQC NSIS Nu NXT-G 
Oberon Object Rexx Objective-C Objective-J OCaml Occam ooc Opa OpenCL OpenEdge ABL OPL Oz Paradox Parrot Pascal Perl PHP Pike PILOT PL/I PL/SQL Pliant PostScript POV-Ray PowerBasic PowerScript PowerShell Processing Prolog Puppet Pure Data Python Q R Racket REALBasic REBOL Revolution REXX RPG (OS/400) Ruby Rust S S-PLUS SAS Sather Scala Scheme Scilab Scratch sed Seed7 Self Shell SIGNAL Simula Simulink Slate Smalltalk Smarty SPARK SPSS SQR Squeak Squirrel Standard ML Suneido SuperCollider TACL Tcl Tex thinBasic TOM Transact-SQL Turing TypeScript Vala/Genie VBScript Verilog VHDL VimL Visual Basic .NET WebDNA Whitespace X10 xBase XBase++ Xen XPL XSLT XQuery yacc Yorick Z shell

So, did you find one that you liked? Or did this stir up memories from long ago with languages you thought were dead and buried? Again, if there is a language you believe belongs in this list, please leave a comment and a Wikipedia or official site link for the language. Related articles: On Programming Languages (raganwald.posterous.com), Polyglot Programmer (crowdint.com), New Programming Language Makes Social Coding Easier (technologyreview.in).  Reference: The Big List of 256 Programming Languages from our JCG partner Rob Diana at the Regular Geek blog. ...

Dynamic hot-swap environment inside Java with atomic updates

One could argue that the above title could be shortened to "OSGi", and I want to discard that thought process at the very beginning. No offense to OSGi; it is a great specification which, I believe, got messed up at the implementation layer or at the usability layer. You could of course do this with OSGi, but only with some custom work as well. The downside of using OSGi to solve this issue is the unwanted complexity it introduces into the development process. We were inspired by JRebel, and for a moment we thought something along those lines was what we wanted, but we soon realized we did not want to go into byte code injection on a production-grade runtime. So let's analyze the problem domain. Problem Domain The problem we are trying to address relates to the UltraESB, specifically its live updates feature. UltraESB supports atomically updating/adding new configuration fragments (referred to as "Deployment Units") on a running ESB, without any downtime and, most importantly, without any inconsistent states. However, one limitation of this feature was that if a particular Java class residing in the user class space required a change for a configuration update of a deployment unit, a restart of the JVM was needed. While this is affordable in a clustered deployment (with a round-robin restart), in a single-instance deployment it introduced downtime to the whole system. We had to make sure we preserved a few guarantees in solving this: while a deployment unit is being updated, the messages already accepted by that unit should use all resources, including the loaded classes (and any class yet to be loaded), of the existing unit, while any new messages (after completing the update of the unit) have to be dispatched to the new deployment unit configuration and its resource base. We call this the "Guarantee of Consistency".
To ensure this, we need to manage 2 (or more, for that matter) versions of the same class on the same JVM, so that the respective deployment units can use the classes in an atomic manner. Let's call this the "Guarantee of Atomicity" of a deployment unit. A deployment unit configuration may contain Java fragments which are compiled on-the-fly at update time and which may depend on the updated classes, so the compiler has to be able to locate the new version of a class during the compilation process. This is the "Guarantee of Correctness". The process of updating has to be transparent to the users (they need to worry about it neither at development time nor at deployment time) and the whole process should be simple. Let's call this the "Guarantee of Simplicity". Now you will understand this problem to be something more than OSGi, as the compilation is something that OSGi won't be able to solve on its own (AFAIK), at least at the time I am writing this blog. Coming back to OSGi, to make it crystal clear why we didn't go down that path, let's analyze the requirement in detail. What we really want is not a completely modular JVM, but rather a specific space inside the JVM that is dynamic and atomically re-loadable. Mapping this to our actual use-case: anything that a user writes and plugs into the ESB (i.e. a deployment unit containing proxy services, sequences and mediation logic) should be dynamically and atomically re-loadable and versionable, but not the ESB core which executes the user code. This is what users have asked from us, not the ability to add another feature to the ESB at runtime without a restart. I agree it is cool to be able to add new features, but nobody seems to want that, nor the complexity associated with it. We were ready to do any sort of complex work, but we were not ready to pass that complexity, or any variation of it, on to our users.
Proposed Solution If you want the first 2 guarantees, "Consistency/Atomicity" (being able to have 2 versions of the same class loaded in the runtime and using the right class for the right task), in Java, you have no other way than writing a new class loader which forces the JVM to do child-first class loading; the JVM standard class loaders are all parent-first. The WebAppClassLoader of a typical application container is very close to what we wanted, but we needed dynamic reloading in production environments. The old class space and the new class space should be managed by 2 instances of this class loader, to safely isolate the 2 versions. To understand the above, it is important to understand how the JVM identifies classes. Even though from a Java language perspective a class is uniquely identified by its FQN, the "package name + class name", from the JVM perspective the class loader which loaded the class is also part of a class's identity. In OSGi-like environments, this is why you see a ClassCastException even though you are casting to the correct type. So the conclusion is that we need to write a class loader and keep separate instances of that class loader for different versions of different deployment units, which are re-loadable. To make sure that the on-the-fly compiler sees the correct classes when compiling the sequence fragments, guaranteeing "Correctness", there also needs to be a JavaFileManager implementation which looks at the updated class space. The Java compiler task, javac, searches the dependencies needed to compile a class via the specified file manager, as JavaFileObject instances, and not via a class loader as Class objects; this allows the compiler to effectively resolve classes even when there are dependencies among the classes being compiled.
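For illustration, the child-first delegation described above can be sketched with a small custom loader. This is my own minimal sketch, not the actual UltraESB HotSwapClassLoader; the class name and structure are assumptions.

```java
import java.net.URL;
import java.net.URLClassLoader;

// Illustrative child-first class loader: resolve from the unit's own class
// space before delegating to the parent, inverting the JVM's default
// parent-first behaviour.
public class ChildFirstClassLoader extends URLClassLoader {

    public ChildFirstClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            // Already defined by this loader instance?
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    // Child first: look in our own URLs before asking the parent.
                    c = findClass(name);
                } catch (ClassNotFoundException e) {
                    // Fall back to the normal parent-first delegation; this also
                    // covers java.* classes, which only the bootstrap loader
                    // may define.
                    c = super.loadClass(name, false);
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}
```

Two instances of such a loader, one per deployment unit version, keep the two class spaces isolated, since the JVM treats same-FQN classes from different loaders as distinct types.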
Further, the user shouldn't be asked to place jar files in a versioned file structure, so as not to affect the guarantee of "Simplicity"; rather, the ESB itself has to manage this jar file versioning to make sure that we do not mix class spaces of different versions. This is also important for the correct operation of the compiler task across versions, as the compiler uses memory-mapped files to read the class definitions over the input streams to the classes provided by the file manager, forcing the maintenance of a physical copy of each and every version of the jar files/classes. Execution of the Implementation Let me first point you to the complete changeset, which you can refer to from time to time while reading about the implementation. We identified 3 key spaces to be implemented, the first of which is a class loader providing the classes of the user class space. We named it the HotSwapClassLoader (I am not going to show code snippets in the blog; please do not hesitate to browse the complete code, keeping in mind the terms of the AGPL license, as the code is AGPL). We wanted to associate this class loader with a version of a deployment unit, which is inherently supported in UltraESB as it keeps these as separate Spring sub-contexts. So the creation of any new deployment unit configuration, including a new version of an existing deployment unit, instantiates a new instance of this class loader and uses it as the resource/class loader for the deployment unit configuration. At initialization, the class loader calculates a normalized hash value of the user class space and checks whether there is an existing copy of the class space for the current version; it then either uses that copy or creates a new one depending on the above assertion. This hash, and the reuse of an existing copy of a class space, prevents the management of 2 copies of the same user class space version, as the whole process is synchronized on a static final lock.
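The "normalized hash" step could, for illustration, look something like the following. This is my own guess at a reasonable normalization, not the actual UltraESB code; the class and method names are made up.

```java
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: digest the jar names and contents of a class space in
// sorted order, so the same set of jars always yields the same version hash
// regardless of directory listing order.
public class ClassSpaceVersion {

    public static String classSpaceHash(Path libDir) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        List<Path> jars = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(libDir, "*.jar")) {
            for (Path p : ds) jars.add(p);
        }
        Collections.sort(jars); // normalization: ordering must not affect the hash
        for (Path jar : jars) {
            md.update(jar.getFileName().toString().getBytes("UTF-8"));
            md.update(Files.readAllBytes(jar));
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}
```

A hash like this lets the loader decide cheaply whether a physical copy of the class space for this version already exists, or whether a fresh copy must be made.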
It then operates on that copy of the user class space. This copying is a must, so that the user does not have to worry about class versioning and so that the correct set of classes is used in a given configuration. The class loader also takes extensive measures to make sure that the class space copy is cleaned up at the earliest possible time; however, that only guarantees an eventual cleanup. The next main item of the implementation is the InMemoryFileManager, an existing class which was modified to expose, via the list method, the user class space in addition to the in-memory compiled source code fragments, as an Iterable of SwappableJavaFileObject instances. The file manager first queries the HotSwapClassLoader for the SwappableJavaFileObject instances corresponding to the user class space, then the system class space, and returns a WrappedIterator which makes sure the user space classes get precedence. In the final step of the implementation, after this adjustment/customization of the core JVM features, it was just a matter of using this custom class loader to load the classes for sequences and proxy services, and of providing the custom file manager to the fragment compilation task of a deployment unit, to complete the solution. We also wanted a switch to disable this, while it is enabled by default and recommended for production deployments. To facilitate that and a few other customizations of the runtime environment, a concept of Environment has been introduced to UltraESB, borrowed from the Grails environments feature. This concluded with a successful implementation of a dynamic runtime which is "Consistent", "Atomic", "Correct" and, most importantly, "Simple for the users". Operational Behavior Now that we have the solution implemented, let's look at a few UltraESB internals to see how this operates in a production deployment.
Any deployment unit configuration in the production environment is updated by issuing the configuration add or update administration command. This command can be issued either via raw JMX or via any administration tool implemented on top of the JMX operations, such as UTerm or UConsole. This implementation doesn't change anything in the way you do updates; it enhances them with the ability to add/replace jar files in the lib/custom user class space of the UltraESB, making sure the updated jar files/classes are picked up for the new configuration upon issuing the aforesaid administration command after the update. You may try this on the nightly builds of UltraESB, or wait for the 2.0.0 release, which is scheduled to be out with a lot more new cool yet usable features in mid January 2013.   Reference: Dynamic hot-swap environment inside Java with atomic updates from our JCG partner Ruwan Linton at the Blind Vision – of Software Engineering and Life blog. ...

Devoxx 2012: Java 8 Lambda and Parallelism, Part 1

Overview Devoxx, the biggest vendor-independent Java conference in the world, took place in Antwerp, Belgium on 12 – 16 November. This year it was bigger than ever, reaching 3400 attendees from 40 different countries. As last year, a small group of colleagues from SAP and I were there and enjoyed it a lot. After the impressive dance of Nao robots and the opening keynotes, more than 200 conference sessions explored a variety of different technology areas, ranging from Java SE to methodology and robotics. One of the most interesting topics for me was the evolution of the Java language and platform in JDK 8. My interest was driven partly by the fact that I was already starting work on Wordcounter, and finishing work on another concurrent Java library named Evictor, about which I will be blogging in a future post. In this blog series, I would like to share somewhat more detailed summaries of the sessions on this topic which I attended. These three sessions all took place on the same day, in the same room, one after the other, and together provided three different perspectives on lambdas, parallel collections, and parallelism in general in Java 8: "On the road to JDK 8: Lambda, parallel libraries, and more" by Joe Darcy; "Closures and Collections – the World After Eight" by Maurice Naftalin; and "Fork / Join, lambda & parallel(): parallel computing made (too?) easy" by Jose Paumard. In this post, I will cover the first session, with the other two coming soon. On the road to JDK 8: Lambda, parallel libraries, and more In the first session, Joe Darcy, a lead engineer of several projects at Oracle, introduced the key changes to the language coming in JDK 8, such as lambda expressions and default methods, summarized the implementation approach, and examined the parallel libraries and their new programming model. The slides from this session are available here. Evolving the Java platform Joe started by talking a bit about the context and concerns related to evolving the language.
The general evolution policy for OpenJDK is: don't break binary compatibility; avoid introducing source incompatibilities; manage behavioral compatibility changes. This policy also extends to the evolution of the language. These rules mean that old classfiles will always be recognized, the cases when currently legal code stops compiling are limited, and changes in the generated code that introduce behavioral changes are also avoided. The goals of this policy are to keep existing binaries linking and running, and to keep existing sources compiling. This has also influenced the set of features chosen to be implemented in the language itself, as well as how they were implemented. Such concerns were also in effect when adding closures to Java. Interfaces, for example, are a double-edged sword. With the language features that we have today, they cannot evolve compatibly over time. However, in reality APIs age, as people's expectations of how to use them evolve. Adding closures to the language results in a really different programming model, which implies it would be really helpful if interfaces could be evolved compatibly. This resulted in a change affecting both the language and the VM, known as default methods. Project Lambda Project Lambda introduces a coordinated language, library, and VM change. In the language, there are lambda expressions and default methods. In the libraries, there are bulk operations on collections and additional support for parallelism. In the VM, besides the default methods, there are also enhancements to the invokedynamic functionality. This is the biggest change to the language ever done, bigger than other significant changes such as generics. What is a lambda expression?
A lambda expression is an anonymous method with an argument list, a return type, and a body, able to refer to values from the enclosing scope: (Object o) -> o.toString() (Person p) -> p.getName().equals(name) Besides lambda expressions, there is also the method reference syntax: Object::toString The main benefit of lambdas is that they allow the programmer to treat code as data, store it in variables and pass it to methods. Some history When Java was first introduced in 1995, not many languages had closures, but they are present in pretty much every major language today, even C++. For Java, it has been a long and winding road to get support for closures, until Project Lambda finally started in Dec 2009. The current status is that JSR 335 is in early draft review, there are binary builds available, and it's expected to very soon become part of the mainline JDK 8 builds. Internal and external iteration There are two ways to do iteration – internal and external. In external iteration you bring the data to the code, whereas in internal iteration you bring the code to the data. External iteration is what we have today, for example: for (Shape s : shapes) { if (s.getColor() == RED) s.setColor(BLUE); } There are several limitations to this approach. One of them is that the above loop is inherently sequential, even though there is no fundamental reason it couldn't be executed by multiple threads. Re-written to use internal iteration with a lambda, the above code would be: shapes.forEach(s -> { if (s.getColor() == RED) s.setColor(BLUE); }) This is not just a syntactic change, since now the library is in control of how the iteration happens. Written in this way, the code expresses much more the what and less the how, the how being left to the library. The library authors are free to use parallelism, out-of-order execution, laziness, and all kinds of other techniques. This allows the library to abstract over behavior, which is a fundamentally more powerful way of doing things.
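The two styles can be put side by side in a small runnable example. Shape and Color are stand-in types defined here for the sketch; they are not from the talk's codebase.

```java
import java.util.Arrays;
import java.util.List;

// External vs. internal iteration on the talk's shapes example.
enum Color { RED, BLUE }

class Shape {
    private Color color;
    Shape(Color color) { this.color = color; }
    Color getColor() { return color; }
    void setColor(Color color) { this.color = color; }
}

public class IterationDemo {
    public static void main(String[] args) {
        List<Shape> shapes = Arrays.asList(new Shape(Color.RED), new Shape(Color.BLUE));

        // External iteration: the caller drives the loop, inherently sequentially.
        for (Shape s : shapes) {
            if (s.getColor() == Color.RED) s.setColor(Color.BLUE);
        }

        // Internal iteration: the collection drives the loop, and the library
        // is free to choose how (order, laziness, parallelism).
        shapes.forEach(s -> { if (s.getColor() == Color.RED) s.setColor(Color.BLUE); });

        System.out.println(shapes.stream().allMatch(s -> s.getColor() == Color.BLUE)); // prints "true"
    }
}
```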
Functional Interfaces Project Lambda avoided adding new types, instead reusing existing coding practices. Java programmers are familiar with, and have long used, interfaces with one method, such as Runnable, Comparator, or ActionListener. Such interfaces are now called functional interfaces. There will also be new functional interfaces, such as Predicate and Block. A lambda expression evaluates to an instance of a functional interface, for example: Predicate<String> isEmpty = s -> s.isEmpty(); Predicate<String> isEmpty = String::isEmpty; Runnable r = () -> { System.out.println("Boo!"); }; So existing libraries are forward-compatible with lambdas, which results in an "automatic upgrade", maintaining the significant investment in those libraries. Default Methods The above example used a new method on Collection, forEach. However, adding a method to an existing interface is a no-go in Java, as it would result in a runtime exception when a client calls the new method on an old class in which it is not implemented. A default method is an interface method that has an implementation, which is woven in by the VM at link time. In a sense, this is multiple inheritance, but there's no reason to panic, since this is multiple inheritance of behavior, not state. The syntax looks like this: interface Collection<T> { ... default void forEach(Block<T> action) { for (T t : this) action.apply(t); } } There are certain inheritance rules to resolve conflicts between multiple supertypes: Rule 1 – prefer superclass methods to interface methods ("Class wins"). Rule 2 – prefer more specific interfaces to less specific ones ("Subtype wins"). Rule 3 – otherwise, act as if the method is abstract; in the case of conflicting defaults, the concrete class must provide an implementation. In summary, conflicts are resolved by looking for a unique, most specific default-providing interface. With these rules, "diamonds" are not a problem.
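These resolution rules can be seen in a small compilable sketch. The Greeter interfaces are made up for the example; only the default-method mechanics come from the talk.

```java
// "Subtype wins": LoudGreeter's default beats Greeter's when both are inherited.
interface Greeter {
    default String greet() { return "hello from Greeter"; }
}

interface LoudGreeter extends Greeter {
    @Override
    default String greet() { return "HELLO from LoudGreeter"; }
}

// No override needed: LoudGreeter is the unique most specific default provider.
class PoliteGreeter implements Greeter, LoudGreeter { }

class CustomGreeter implements LoudGreeter {
    @Override
    public String greet() {
        // The A.super.m() syntax reaches a specific inherited default.
        return LoudGreeter.super.greet() + "!";
    }
}

public class DefaultMethodDemo {
    public static void main(String[] args) {
        System.out.println(new PoliteGreeter().greet()); // prints "HELLO from LoudGreeter"
        System.out.println(new CustomGreeter().greet()); // prints "HELLO from LoudGreeter!"
    }
}
```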
In the worst case, when there isn't a unique most specific implementation of the method, the subclass must provide one, or there will be a compiler error. If this implementation needs to call one of the inherited implementations, the new syntax for this is A.super.m(). The primary goal of default methods is API evolution, but they are useful as an inheritance mechanism in their own right as well. One other way to benefit from them is optional methods. For example, most implementations of Iterator don't provide a useful remove(), so it can be declared "optional" as follows: interface Iterator<T> { ... default void remove() { throw new UnsupportedOperationException(); } } Bulk operations on collections Bulk operations on collections also enable a map / reduce style of programming. For example, the earlier code could be further decomposed by getting a stream from the shapes collection, filtering the red elements, and then iterating only over the filtered ones: shapes.stream().filter(s -> s.getColor() == RED).forEach(s -> { s.setColor(BLUE); }); The above code corresponds even more closely to the problem statement of what you actually want to get done. There are also other useful bulk operations, such as map, into, or sum. The main advantages of this programming model are: more composability; clarity – each stage does one thing; and the library can use parallelism, out-of-order execution, and laziness for performance. The stream is the basic new abstraction being added to the platform. It encapsulates laziness as a better alternative to "lazy" collections such as LazyList. It is a facility that allows getting a sequence of elements out of it, its source being a collection, an array, or a function. The basic programming model with streams is that of a pipeline, such as collection-filter-map-sum or array-map-sorted-forEach. Since streams are lazy, they only compute as elements are needed, which pays off big in cases like filter-map-findFirst.
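In the final Java 8 API the same pipeline style looks like this (note that some names from the talk later changed, e.g. Block became Consumer and into() was dropped); a small runnable collection-filter-map-sum example:

```java
import java.util.Arrays;
import java.util.List;

public class PipelineDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("stream", "filter", "map", "sum");

        // collection -> filter -> map -> sum, evaluated lazily element by element
        int totalLength = words.stream()
                               .filter(w -> w.length() > 3)   // keep the longer words
                               .mapToInt(String::length)      // map each word to its length
                               .sum();                        // terminal reduction

        System.out.println(totalLength); // prints "12" ("stream" + "filter")
    }
}
```

Each stage does one thing, and only the terminal sum() forces the upstream stages to run.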
Another advantage of streams is that they make it possible to take advantage of fork/join parallelism, by having libraries use fork/join behind the scenes to ease programming and avoid boilerplate. Implementation technique In the last part of his talk, Joe described the advantages and disadvantages of the possible implementation techniques for lambda expressions. Different options such as inner classes and method handles were considered, but not accepted due to their shortcomings. The best solution would involve adding a level of indirection, by letting the compiler emit a declarative recipe, rather than imperative code, for creating a lambda, and then letting the runtime execute that recipe however it deems fit (and making sure it's fast). This sounded like a job for invokedynamic, a new invocation mode introduced with Java SE 7 for an entirely different reason: support for dynamic languages on the JVM. It turned out this feature is not just for dynamic languages any more, as it provides a suitable implementation mechanism for lambdas, and is also much better in terms of performance. Conclusion Project Lambda is a large, coordinated update across the Java language and platform. It enables a much more powerful programming model for collections and takes advantage of new features in the VM. You can evaluate these new features by downloading the JDK 8 build with lambda support. IDE support is also already available, in NetBeans builds with lambda support and in IntelliJ IDEA 12 EAP builds with lambda support. I have already made my own experiences with lambdas in Java in Wordcounter. As I already wrote, I am convinced that this style of programming will quickly become pervasive in Java, so if you don't yet have experience with it, I do encourage you to try it out.   Reference: Devoxx 2012: Java 8 Lambda and Parallelism, Part 1 from our JCG partner Stoyan Rachev at the Stoyan Rachev’s Blog blog. ...

Composing Java annotations

The allowed attribute types of Java annotations are deliberately very restrictive; however, some clean composite annotation types are possible within the allowed types. Consider a sample annotation from the tutorial site: package annotation; @interface ClassPreamble { String author(); String[] reviewers(); } Here author and reviewers are of String and String-array types, which is in keeping with the allowed types of annotation attributes. The following is a comprehensive list of the allowed types (as of Java 7): String; Class; any parameterized invocation of Class; an enum type; an annotation type (do note that cycles are not allowed, so the annotated type cannot refer to itself); an array type whose element type is one of the preceding types. Now, to make a richer ClassPreamble, consider two more annotation types defined this way: package annotation; public @interface Author { String first() default ""; String last() default ""; } package annotation; public @interface Reviewer { String first() default ""; String last() default ""; } With these, the ClassPreamble can be composed from the richer Author and Reviewer annotation types, this way: package annotation; @interface ClassPreamble { Author author(); Reviewer[] reviewers(); } Now an annotation applied on a class looks like this: package annotation; @ClassPreamble(author = @Author(first = "John", last = "Doe"), reviewers = {@Reviewer(first = "first1", last = "last1"), @Reviewer(last = "last2")}) public class MyClass { ....
} This is a contrived example just to demonstrate composition of annotations; however, this approach is used extensively in real-world annotations, for example to define a many-to-many relationship between two JPA entities: @ManyToMany @JoinTable(name="Employee_Project", joinColumns=@JoinColumn(name="Employee_ID"), inverseJoinColumns=@JoinColumn(name="Project_ID")) private Collection<Project> projects;   Reference: Composing Java annotations from our JCG partner Biju Kunjummen at the all and sundry blog. ...

Death by Redirect

It is said that the greatest harm can come from the best intentions. We recently had a case where, because of the best intentions, two @#@&*@!!^@ parties killed our servers with a single request, causing a deadlock involving all of our Tomcat instances, including all HTTP threads. Naturally, not a pleasant situation to find yourself in. Some explanation about our setup is necessary here. We have a number of Tomcat instances that serve HTML pages for our website, located behind a stateless load balancer. The time then came when we added a second application, deployed on Jetty. Since we needed the new app to be served as part of the same website (e.g. http://www.wix.com/jetty-app), we proxied the second (Jetty) application from Tomcat (don't dig into why we proxied Jetty from Tomcat; we thought at the time we had good reasons for it). So in fact we had the following architecture: at the Tomcat end, we were using the Apache HttpClient library to connect to the Jetty application. HttpClient by default is configured to follow redirects. Best Intentions #1: Why should we require the developer to think about redirects? Let's handle them automatically for her… At the Jetty end, we had a generic error handler that, on an error, instead of showing an error page, redirected the user to the homepage of the app on Jetty. Best Intentions #2: Why show the user an error page? Let's redirect him to our homepage… But what happens when the homepage of the Jetty application generates an error? Well, apparently it returns a redirect directive to itself! Now, if a browser had gotten that redirect, it would have entered a redirect loop and broken out of it after about 20 redirects. We would have seen 20 requests all resulting in a redirect, probably seen a traffic spike, but nothing else.
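One lesson in code: make the redirect policy explicit in a proxying client. The sketch below uses the JDK's own HttpURLConnection so it stays dependency-free; with Apache HttpClient the equivalent switch is, for example, disableRedirectHandling() on the client builder in 4.3+ (older versions use a client parameter instead). The class and URL here are illustrative only.

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Redirect-following is typically on by default in HTTP clients. A proxy
// should usually surface the 3xx response to its caller rather than chase
// the redirect itself, so the policy is disabled explicitly here.
public class NoRedirectClient {

    public static HttpURLConnection open(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setInstanceFollowRedirects(false); // hand 3xx back to the caller
        return conn;
    }

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = open("http://www.example.com/");
        System.out.println(conn.getInstanceFollowRedirects()); // prints "false"
    }
}
```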
However, because we had redirects turned on in the HttpClient library, what happened is the following: a request arrives at our Tomcat server, which resolves it to be proxied to the Jetty application; Tomcat Thread #1 proxies a request to Jetty; Jetty has an exception and returns a redirect to http://www.wix.com/jetty-app; Tomcat Thread #1 connects to the www.wix.com host, which goes via the load balancer and ends at another Tomcat thread, Tomcat Thread #2; Tomcat Thread #2 proxies a request to Jetty; Jetty has an exception and returns a redirect to http://www.wix.com/jetty-app; Tomcat Thread #2 connects to the www.wix.com host, which goes via the load balancer and ends at another Tomcat thread, Tomcat Thread #3; and so on, until all threads on all Tomcats are stuck on that same single request. So, what can we learn from this incident? We can learn that the defaults of Apache HttpClient are not necessarily the ones you'd expect. We can learn that if you issue a redirect, you should make sure you are not redirecting to yourself (like our Jetty application homepage). We can learn that the HTTP protocol, which is considered a commodity, can be complicated at times and hard to tune, and that not every developer knows how to perform an HTTP request correctly. We can also learn that when you take on a 3rd party library, you should invest time in learning to use it, to understand the failure points and how to overcome them. However, there is a deeper message here. When we develop software, we trade development velocity against risk. The faster we want to develop software, the more we need to trust the individual developers. The more trust we give developers, the more risk we gain through developer black spots – things a developer fails to think about, e.g. handling redirects. As a software engineer, I am not sure there is a simple solution to this issue – I guess it is up to you.   Reference: Death by Redirect from our JCG partner Yoav Abrahami at the Wix IO blog. ...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.