
What's New Here?


Java Minor Releases Scheme Tweaked Again

In 2013, Oracle announced the Java SE – Change in Version Numbering Scheme. The announcement stated that Limited Update releases (those "that include new functionality and non-security fixes") and Critical Patch Updates (CPUs, those "that only include fixes for security vulnerabilities") would be released with specific version number schemes. In particular, Limited Update releases would have version numbers that are multiples of 20, while Critical Patch Updates would have version numbers that are multiples of 5 and come after the latest Limited Update release version number. The purpose of this scheme change was to leave room for versions with numbers in between, which allows Oracle "to insert releases – for example security alerts or support releases, should that become necessary – without having to renumber later releases."

Yesterday's announcement ("Java CPU and PSU Releases Explained") states, "Starting with the release of Java SE 7 Update 71 (Java SE 7u71) in October 2014, Oracle will release a Critical Patch Update (CPU) at the same time as a corresponding Patch Set Update (PSU) for Java SE 7." This announcement explains the difference between a CPU and a PSU:

Critical Patch Update (CPU) – "Fixes to security vulnerabilities and critical bug fixes." The minimum recommended release for everyone.

Patch Set Update (PSU) – "All fixes in the corresponding CPU" plus "additional non-critical fixes." Recommended only for those needing bugs fixed by the PSU's additional fixes.

Yesterday's announcement also states that PSU releases (which are really CPU+ releases) will be released along with their corresponding CPU releases. Because the additional fixes that a PSU release contains beyond what is in the CPU release are expected to be part of the next CPU release, developers are encouraged to experiment with PSU releases to ensure that upcoming CPU fixes work well for them.

Reference: Java Minor Releases Scheme Tweaked Again from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

How to use Hibernate to generate a DDL script from your Play! Framework project

OK, so you have been using the Hibernate property name="hibernate.hbm2ddl.auto" value="update" to continuously update your database schema, but now you need a complete DDL script? Use this method from your Global class's onStart to export the DDL scripts. Just give it the package name (with path) of your entities as well as a file name:

public void onStart(Application app) {
    exportDatabaseSchema("models", "create_tables.sql");
}

public void exportDatabaseSchema(String packageName, String scriptFilename) {

    final Configuration configuration = new Configuration();
    final Reflections reflections = new Reflections(packageName);
    final Set<Class<?>> classes = reflections.getTypesAnnotatedWith(Entity.class);

    // iterate over all Entity classes in the package indicated by the name
    for (final Class<?> clazz : classes) {
        configuration.addAnnotatedClass(clazz);
    }
    configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.PostgreSQL9Dialect");

    SchemaExport schema = new SchemaExport(configuration);
    schema.setOutputFile(scriptFilename);
    schema.setDelimiter(";");
    schema.execute(Target.SCRIPT, SchemaExport.Type.CREATE); // just export the create statements in the script
}

That is it! Thanks to @MonCalamari for answering my question on Stack Overflow here.

Reference: How to use Hibernate to generate a DDL script from your Play! Framework project from our JCG partner Brian Porter at the Poornerd blog....
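A small variant of the export above may also be useful: if you want the generated script to contain DROP statements ahead of the CREATE statements (handy when re-running against an existing schema), the same SchemaExport instance can be asked to emit both. This is only a sketch, assuming the Hibernate 4.x SchemaExport API used in the snippet above (Type.BOTH is an assumption on my part, not something from the original post):

    // Hedged variant of the export above, assuming Hibernate 4.x's SchemaExport.Type enum:
    // Type.BOTH writes DROP statements followed by CREATE statements into the script file.
    SchemaExport schema = new SchemaExport(configuration);
    schema.setOutputFile(scriptFilename);
    schema.setDelimiter(";");
    schema.execute(Target.SCRIPT, SchemaExport.Type.BOTH); // drop + create, script only, no live DB export

Target.SCRIPT keeps the statements out of the live database; the script file is the only output.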

Eclipse Extension Point Evaluation Made Easy

Coding Eclipse extension point evaluations tends to be verbose and not very self-explanatory. As I got round to busying myself with this topic recently, I wrote a little helper with the intent to reduce boilerplate code for common programming steps, while increasing development guidance and readability at the same time. It turned out to be not that easy to find an expressive solution that matches all the use cases I could extract from current projects. So I thought it might be a good idea to share my findings and see what other people think of it.

Eclipse Extension Point Evaluation

Consider a simple extension point definition that supports an unbounded contribution of extensions. Each of these contributions should provide a Runnable implementation to perform some sort of operation. A usual evaluation task could be to retrieve all contributions, create the executable extensions and invoke each of those:

public class ContributionEvaluation {

  private static final String EP_ID = "com.codeaffine.post.contribution";

  public void evaluate() {
    IExtensionRegistry registry = Platform.getExtensionRegistry();
    IConfigurationElement[] elements = registry.getConfigurationElementsFor( EP_ID );
    Collection<Runnable> contributions = new ArrayList<Runnable>();
    for( IConfigurationElement element : elements ) {
      Object extension;
      try {
        extension = element.createExecutableExtension( "class" );
      } catch( CoreException e ) {
        throw new RuntimeException( e );
      }
      contributions.add( ( Runnable )extension );
    }
    for( Runnable runnable : contributions ) {
      runnable.run();
    }
  }
}

While evaluate could be split into smaller methods to clarify its responsibilities, the class would also be filled with more glue code. As I find such sections hard to read and awkward to write, I pondered a fluent interface approach that should guide a developer through the various implementation steps. Combined with Java 8 lambda expressions I was able to create an auxiliary that boils the evaluate functionality down to:

public void evaluate() {
  new RegistryAdapter()
    .createExecutableExtensions( EP_ID, Runnable.class )
    .withConfiguration( ( runnable, extension ) -> runnable.run() )
    .process();
}

Admittedly I cheated a bit, since it is possible to improve the first example a little by using the Java 8 Collection#forEach feature instead of looping explicitly. But I think this still would not make the code really great! For general information on how to extend Eclipse using the extension point mechanism you might refer to the Plug-in Development Environment Guide of the online documentation.

RegistryAdapter

The main class of the helper implementation is the RegistryAdapter, which encapsulates the system's IExtensionRegistry instance and provides a set of methods to define which operations should be performed with respect to a particular extension point. At the moment the adapter allows reading contribution configurations or creating executable extensions. Multiple contributions are evaluated, as shown above, using methods named in the plural – to evaluate exactly one contribution element, methods named in the singular are appropriate. This means that to operate on a particular runnable contribution you would use createExecutableExtension instead of createExecutableExtensions. Depending on which operation is selected, different configuration options are available. This is made possible as the fluent API implements a kind of grammar to improve guidance and programming safety.
For example, the readExtension operation does not allow registering an ExecutableExtensionConfigurator, since this would be an inept combination. The method withConfiguration allows you to configure or initialize each executable extension after its creation. But as shown in the example above, it can also be used to invoke the runnable extension directly. Due to the type-safe implementation of createExecutableExtension(s) it is possible to access the extension instance within the lambda expression without a cast. Finally, the method process() executes the specified operation and returns a typed Collection of the created elements in case they are needed for further processing:

Collection<Extension> extensions
  = new RegistryAdapter().readExtensions( EP_ID ).process();

Predicate

But how is it possible to select a single Eclipse extension point contribution element with the adapter? Assume that we add an attribute id to our contribution definition above. The fluent API of RegistryAdapter allows us to specify a Predicate that can be used to select a particular contribution:

public void evaluate() {
  new RegistryAdapter()
    .createExecutableExtension( EP_ID, Runnable.class )
    .withConfiguration( ( runnable, extension ) -> runnable.run() )
    .thatMatches( attribute( "id", "myContribution" ) )
    .process();
}

There is a utility class Predicates that provides a set of predefined implementations to ease common use cases like attribute selection. The code above is a shortcut, using static imports, for:

.thatMatches( Predicates.attribute( "id", "myContribution" ) )

where "myContribution" stands for the unique id value declared in the extension contribution:

<extension point="com.codeaffine.post.contribution">
  <contribution id="myContribution" class="com.codeaffine.post.MyContribution">
  </contribution>
</extension>

Of course it is possible to implement custom predicates in case the presets are not sufficient:

public void evaluate() {
  Collection<Extension> extensions = new RegistryAdapter()
    .readExtensions( EP_ID, Description.class )
    .thatMatches( (extension) -> extension.getValue() != null )
    .process();
}

Extension

Eclipse extension point evaluation usually operates on IConfigurationElement. The adapter API deliberately blurs the distinction between extension point and configuration element and provides a simple encapsulation called Extension. For more sophisticated tasks, however, Extension instances make the underlying configuration element accessible. In general, Extension provides accessors for attribute values, contribution names, contribution values and nested contributions, and allows you to create an executable extension. One of the major reasons to introduce this abstraction was to have an API that implicitly converts checked CoreExceptions to runtime exceptions, as I am accustomed to working with the fail-fast approach without bulky checked exception handling.

Exception Handling

However, in case the Eclipse extension evaluation is invoked at startup time of a plug-in or gets executed in the background, fail-fast is not an option. And it is surely not reasonable to ignore the remaining contributions after a particular contribution has caused a problem. Because of this the adapter API allows you to replace the fail-fast mechanism with explicit exception handling:

public void evaluate() {
  Collection<Runnable> contributions = new RegistryAdapter()
    .createExecutableExtensions( EP_ID, Runnable.class )
    .withExceptionHandler( (cause) -> handle( cause ) )
    .process();
  [...]
}

private void handle( CoreException cause ) {
  // do what you gotta do
}

Note that the returned collection of contributions of course contains only those elements that did not run into any trouble.

Where to get it?

For those who want to check it out, there is a P2 repository that contains the feature com.codeaffine.eclipse.core.runtime providing the RegistryAdapter and its accompanying classes. The repository is located at http://fappel.github.io/xiliary/ and the source code and issue tracker are hosted at https://github.com/fappel/xiliary. Although documentation is completely missing at this moment, it should be quite easy to get started with the explanations given in this post. But please keep in mind that the little tool is in a very early state and will probably undergo some API changes. In particular, dealing only with CoreExceptions while looping over the contributions is still a bit too weak.

Conclusion

The sections above introduced the basic functionality of the RegistryAdapter and focused on how it eases Eclipse extension point evaluation. I replaced old implementations in my current projects with the adapter and did not run into any trouble, which means that the solution looks promising to me so far… But there is still more than meets the eye. With this little helper in place, combined with an additional custom assert type, writing integration tests for an extension point's evaluation functionality becomes a piece of cake. That topic is however out of scope for this post and will be covered next time. So stay tuned and do not forget to share the knowledge, in case you find the approach described above useful – thanks!

Reference: Eclipse Extension Point Evaluation Made Easy from our JCG partner Rudiger Herrmann at the Code Affine blog....

Microservices

A microservice is a small, focussed piece of software that can be developed, deployed and upgraded independently. Commonly, it exposes its functionality via a synchronous protocol such as HTTP/REST. That is my understanding of microservices, at least. There is no hard definition of what they are, but they currently seem to be the cool kid on the block, attracting increasing attention and becoming a mainstream approach to avoiding the problems of monolithic architectures. Like any architectural solution, they are not without their downsides too, such as increased deployment and monitoring complexity. This post will have a look at some of the common characteristics of microservices and contrast them with monolithic architectures.

Definition and Characteristics

Let's start with some definitions from folks wiser than I:

"The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API." – Microservices by Martin Fowler and James Lewis [1]

"Functionally decompose an application into a set of collaborating services, each with a set of narrow, related functions, developed and deployed independently, with its own database." – Microservices Architecture by Chris Richardson [2]

"Microservices are a style of software architecture that involves delivering systems as a set of very small, granular, independent collaborating services." – Microservices – Not A Free Lunch by Benjamin Wootton [6]

Some of this may not sound new. Since way back in 1984, the Unix Philosophy [8] has advocated writing programs that do one thing well, working together with other programs through standard interfaces. So perhaps more useful than definitions are some common characteristics of a microservice:

Single purpose – Each service should be focussed, doing one thing well. Cliff Moon [4] defined a microservice as "any isolated network service that will only perform operations on a single type of resource", and gives the example of a user microservice that can perform operations such as new signups, password resets, etc.

Loosely coupled – A microservice should be able to operate without relying on other services. That is not to say that microservices cannot communicate with other microservices, it's just that this should be the exception rather than the rule. A microservice should be, where possible, self-sufficient.

Independently deployable – With monoliths, a change to any single piece of the application requires the entire app to be deployed. With microservices, each one should be deployable by itself, independently of any other services or apps. This can provide great flexibility, or 'agility'. Ideally this should be done in a fully automated way; you'll want a solid Continuous Integration pipeline, and a devops culture, behind you. As discussed below in the disadvantages section, the one caveat here is when you are changing your interfaces.

Small – The 'micro' in microservice isn't too important. 10 to 1000 lines of code might be a reasonable ball park, but a much better definition might be 'small enough to fit in your head' [3], that is, the project should be small enough to be easily understood by one developer. Another might be 'small enough to throw away', or rewrite rather than maintain [3]. At one end of the scale, a single developer could create a microservice in a day. At the other end, Fowler [1] suggests that "The largest follow Amazon's notion of the Two Pizza Team (i.e.
the whole team can be fed by two pizzas), meaning no more than a dozen people". The main point is that size is not the most important characteristic of a microservice – a single, focused purpose is. However, perhaps the best way to understand microservices is to consider an alternative, contrasting architectural style: the monolith.

The Monolithic Alternative

A monolithic application is one that is built and deployed as a single artifact (e.g. a war file). In many ways this is the opposite of the microservice architecture. Applications often start out life as a monolith, and for good reason. Monoliths are:

- Easy to set up and develop – a single project in an IDE
- Easy to deploy – a single war file
- Able to be scaled horizontally by adding more servers, typically behind a load balancer

In fact it is probably advisable to start your applications as a monolith. Keep things simple until you have a good reason for change (avoiding YAGNI architectural decisions). That being said, as monoliths grow, you may well start running into problems…

Problems with Monoliths

The codebase can be difficult to set up and understand – A large monolithic app can overload your IDE, be slow to build (and hence to run tests against), and it can be difficult to understand the whole application. This can cause a downward spiral in software quality.

Forced team dependencies – Teams are forced to coordinate (e.g. on technology choices, release cycles, shared resources etc.), even if what they are working on has little, if anything, in common. For example, two teams working on separate functionality within the same monolith may be forced to use the same versions of libraries. Team A needs to use Spring 3 for legacy code reasons. Team B wants to use Spring 4. With both Spring 3 and Spring 4 in your list of dependencies, which one actually gets used? In the Java world it is surprisingly easy to run into these conflicts.

How do you split up teams when using a monolithic architecture? – Often teams are split by technology, e.g. UI teams, server-side logic teams, and database teams. Even simple changes can require a lot of different teams. It may often be easier to hack the required functionality into the area your own team is responsible for, rather than deal with cross-team coordination, even if it would have been better placed elsewhere – Conway's Law [9] in action. This is especially true the larger, more dispersed and more bureaucratic a team is.

Obstacle to frequent deployments – When you deploy your entire codebase in one go, each deployment becomes a much bigger, likely organization-wide, deal. Deployment cycles become slower and less frequent, which makes each deployment more risky.

A long-term commitment to a technology stack – Whether you like it or not! Would you like to start using Ruby in your project? If the whole app is written in Java, then you will probably be forced to rewrite it all! A daunting, and unlikely, possibility. This all-or-nothing type of setup is closely tied to the next point of shared libraries…

Why use Microservices?

In contrast to monolithic applications, the microservice approach is to focus on a specific area of business functionality, not technology. Such services are often developed by teams that are cross-functional. This is perhaps one of the reasons why so many job descriptions these days say 'full stack'. So, what are the advantages of using microservices?
Many of the advantages of microservices relate to the problems mentioned for monolithic architectures above and include:

Smaller and focussed – Being smaller and focussed means microservices are easier for developers to understand, and faster to build, deploy and start up.

Independently deployable – Each can be deployed without impacting other services (with interface changes being a notable exception).

Independently scalable – It is easy to add more instances of the services that are experiencing the heaviest load.

Independent technology stack – Each microservice can use a completely independent technology stack, making it easier to migrate your technology stack. I think it is worth pointing out here that just because you can use a different technology for each microservice doesn't mean you should! Increasingly heterogeneous stacks bring increasing complexity. Exercise caution and be driven by business needs.

Improved resiliency – If one service goes down (e.g. a memory leak), the rest of the app should continue to run unaffected.

Disadvantages

Distributed applications are always more complicated! While monoliths typically use in-memory calls, microservices typically require inter-process calls, often to different boxes but in the same data center. As well as being more expensive, the APIs associated with remote calls are usually coarser-grained, which can be more awkward to use. Refactoring code in a monolith is easy; doing it in a microservice architecture can be much more difficult, e.g. moving code between microservices. And although microservices allow you to release independently, that is not so straightforward when you are changing interfaces – it requires coordination across the clients and the service that is changing. That being said, some ways to mitigate this are:

- Use flexible, forgiving, broad interfaces.
- Be as tolerant as possible when reading data from a service. Use design patterns like the Tolerant Reader.

"Be conservative in what you do, be liberal in what you accept from others" — Jon Postel

Where things can start to get hard with microservices is at an operations level. Runtime management and monitoring of microservices in particular can be problematic. A good ops/devops team is necessary, particularly when you are deploying large numbers of microservices at scale. Whereas detecting problems in a single monolithic application can be dealt with by attaching a monitor to the single process, doing the same when you have dozens of processes interacting is much more difficult.

Microservices vs SOA

SOA, or Service Oriented Architecture, is an architectural design pattern that seems to have somewhat fallen out of favor. SOA also involved a collection of services, so what is the difference between SOA and microservices? It is a difficult question to answer, but Fowler here used the term 'SOA done right', which I like. Adrian Cockcroft [15] described microservices as being like SOA but with a bounded context. Wikipedia distinguishes the two by saying that SOA aims at integrating various (business) applications, whereas several microservices belong to one application only [14]. A related aspect is that many SOAs use ESBs (Enterprise Service Buses), whereas microservices tend towards smart endpoints, dumb pipes [1]. Finally, although neither microservices nor SOAs are tied to any one protocol or data format, SOAs did seem to frequently involve Simple Object Access Protocol (SOAP)-based web services, using XML and WSDL etc., whereas microservices seem to commonly favour REST and JSON.

Who is using Microservices?
Most large scale web sites, including Netflix, Amazon and eBay, have evolved from a monolithic architecture to a microservices architecture. Amazon was one of the pioneers of using microservices. Between 100 and 150 services are accessed to build a single page [10]. If, for example, the recommendation service is down, default recommendations can be used. These may be less effective at tempting you to buy, but that is a better alternative than errors or no recommendations at all. Netflix are also pioneers in the microservice world, not only using microservices extensively, but also releasing many useful tools back into the open source world, including Chaos Monkey for testing web application resiliency and Janitor Monkey for cleaning up unused resources. See more at netflix.github.io. TicketMaster, the ticket sales and distribution company, is also making increasing use of microservices to give them "Boardroom agility or the process of quickly reacting to the marketplace." [12]

Best practices

Some best practices for microservices might be:

Separate codebases – Each microservice has its own repository and CI build setup.

Separate interface and implementation – Separate the API and implementation modules, using a Maven multi-module project or similar. For example, clients should depend on CustomerDao rather than CustomerDaoImpl or JpaCustomerDao.

Use monitoring! – For example AppDynamics and New Relic.

Have health checks built into your services.

Have standard templates available – If many developers are creating microservices, have a template they can use that gets them up and running quickly and implements corporate standards for logging and the aforementioned monitoring and health checks.

Support multiple versions – Leave multiple old microservice versions running. Fast introduction vs. slow retirement asymmetry. [11]

Summary

As an architectural approach, and particularly as an alternative to monolithic architectures, microservices are an attractive choice. They allow independent technology stacks to be used, with each service being independently built and deployed, meaning you are much more likely to be able to follow the 'deploy early, deploy often' mantra. That being said, they do bring their own complexities, including deployment and monitoring. It is advisable to start with the relative simplicity of a monolithic approach and only consider microservices when you start running into problems. Even then, migrating slowly to microservices is likely a sensible approach: for example, introducing new areas of functionality as microservices, and slowly migrating old ones as they need updates and rewrites anyway. And all the while, bear in mind that while each microservice itself may be simple, some of the complexity is simply moved up a level. The coordination of dozens or even hundreds of microservices brings many new challenges, including build, deployment and monitoring, and shouldn't be undertaken without a solid Continuous Delivery infrastructure in place and a good devops mentality within the team. Cross-functional, multidisciplinary teams using automation are essential. Used judiciously and with the right infrastructure in place, microservices seem to be thriving. I like Martin Fowler's guarded optimism: "We write with cautious optimism that microservices can be a worthwhile road to tread".
[12]

References and reading materials:

1. Microservices by Martin Fowler and James Lewis
2. Microservices Architecture by Chris Richardson
3. Micro services – Java, the Unix Way by James Lewis
4. Microservices, or How I Learned To Stop Making Monoliths and Love Conway's Law by Cliff Moon
5. Micro service architecture by Fred George
6. Microservices are not a free lunch by Benjamin Wootton
7. Antifragility and Microservices by Russ Miles
8. The Unix Philosophy
9. Conway's Law
10. Amazon Architecture
11. Migrating to microservices by Adrian Cockcroft
12. Microservices with Spring Boot
13. Microservices for the Grumpy Neckbeard
14. Microservices definition on Wikipedia
15. Microservices and DevOps by Adrian Cockcroft

Reference: Microservices from our JCG partner Shaun Abram at the Shaun Abram blog....

Typesafe APIs for the browser

A new feature in Ceylon 1.1 that I've not blogged about before is dynamic interfaces. This was something that Enrique and I worked on together with Corbin Uselton, one of our GSoC students. Ordinarily, when we interact with JavaScript objects, we do it from within a dynamic block, where Ceylon's usual scrupulous typechecking is suppressed. The problem with this approach is that if it's an API I use regularly, my IDE can't help me remember the names and signatures of all the operations of the API. Dynamic interfaces make it possible to ascribe static types to an untyped JavaScript API. For example, we could write a dynamic interface for the HTML 5 CanvasRenderingContext2D like this:

dynamic CanvasRenderingContext2D {
    shared formal variable String|CanvasGradient|CanvasPattern fillStyle;
    shared formal variable String font;

    shared formal void beginPath();
    shared formal void closePath();

    shared formal void moveTo(Integer x, Integer y);
    shared formal void lineTo(Integer x, Integer y);

    shared formal void fill();
    shared formal void stroke();

    shared formal void fillText(String text, Integer x, Integer y, Integer maxWidth=-1);

    shared formal void arc(Integer x, Integer y, Integer radius, Float startAngle, Float endAngle, Boolean anticlockwise);
    shared formal void arcTo(Integer x1, Integer y1, Integer x2, Float y2, Integer radius);

    shared formal void bezierCurveTo(Integer cp1x, Integer cp1y, Integer cp2x, Float cp2y, Integer x, Integer y);

    shared formal void strokeRect(Integer x, Integer y, Integer width, Integer height);
    shared formal void fillRect(Integer x, Integer y, Integer width, Integer height);
    shared formal void clearRect(Integer x, Integer y, Integer width, Integer height);

    shared formal CanvasGradient createLinearGradient(Integer x0, Integer y0, Integer x1, Integer y1);
    shared formal CanvasGradient createRadialGradient(Integer x0, Integer y0, Integer r0, Integer x1, Integer y1, Integer r1);
    shared formal CanvasPattern createPattern(dynamic image, String repetition);

    //TODO: more operations!!
}

dynamic CanvasGradient {
    shared formal void addColorStop(Integer offset, String color);
}

dynamic CanvasPattern {
    //todo
}

Now, if we assign an instance of JavaScript's CanvasRenderingContext2D to this interface type, we won't need to be inside a dynamic block when we call its methods. You can try it out in your own browser by clicking the "TRY ONLINE" button!

CanvasRenderingContext2D ctx;
dynamic {
    //get the CanvasRenderingContext2D from the
    //canvas element using dynamically typed code
    ctx = ... ;
}

//typesafe code, checked at compile time
ctx.fillStyle = "navy";
ctx.fillRect(50, 50, 235, 60);
ctx.beginPath();
ctx.moveTo(100,50);
ctx.lineTo(60,5);
ctx.lineTo(75,75);
ctx.fill();
ctx.fillStyle = "orange";
ctx.font = "40px PT Sans";
ctx.fillText("Hello world!", 60, 95);

Notice that we don't need to ascribe an explicit type to every operation of the interface. We can leave some methods, or even just some parameters of a method, untyped by declaring them dynamic. Such operations may only be called from within a dynamic block, however. A word of caution: dynamic interfaces are a convenient fiction. They can help make it easier to work with an API in your IDE, but at runtime there is nothing Ceylon can do to ensure that the object you assign to the dynamic interface type actually implements the operations you've ascribed to it.

Reference: Typesafe APIs for the browser from our JCG partner Gavin King at the Ceylon Team blog....

Spring @Configuration – RabbitMQ connectivity

I have been playing around with converting an application of mine to use the Spring @Configuration mechanism to configure connectivity to RabbitMQ – originally I had the configuration described using an XML bean definition file. So this was my original configuration:

<beans ...>

    <context:property-placeholder/>

    <rabbit:connection-factory id="rabbitConnectionFactory"
                               username="${rabbit.user}"
                               host="localhost"
                               password="${rabbit.pass}"
                               port="5672"/>

    <rabbit:template id="amqpTemplate"
                     connection-factory="rabbitConnectionFactory"
                     exchange="rmq.rube.exchange"
                     routing-key="rube.key"
                     channel-transacted="true"/>

    <rabbit:queue name="rmq.rube.queue" durable="true"/>

    <rabbit:direct-exchange name="rmq.rube.exchange" durable="true">
        <rabbit:bindings>
            <rabbit:binding queue="rmq.rube.queue" key="rube.key"></rabbit:binding>
        </rabbit:bindings>
    </rabbit:direct-exchange>

</beans>

This is a fairly simple configuration that:

- sets up a connection to a RabbitMQ server,
- creates a durable queue (if not available),
- creates a durable exchange, and
- configures a binding to send messages to the exchange to be routed to the queue based on a routing key called "rube.key".

This can be translated to the following @Configuration based Java configuration:

@Configuration
public class RabbitConfig {

    @Autowired
    private ConnectionFactory rabbitConnectionFactory;

    @Bean
    DirectExchange rubeExchange() {
        return new DirectExchange("rmq.rube.exchange", true, false);
    }

    @Bean
    public Queue rubeQueue() {
        return new Queue("rmq.rube.queue", true);
    }

    @Bean
    Binding rubeExchangeBinding(DirectExchange rubeExchange, Queue rubeQueue) {
        return BindingBuilder.bind(rubeQueue).to(rubeExchange).with("rube.key");
    }

    @Bean
    public RabbitTemplate rubeExchangeTemplate() {
        RabbitTemplate r = new RabbitTemplate(rabbitConnectionFactory);
        r.setExchange("rmq.rube.exchange");
        r.setRoutingKey("rube.key");
        r.setConnectionFactory(rabbitConnectionFactory);
        return r;
    }
}

This configuration should look much simpler than the XML version. I am cheating a little here though – you may have noticed a missing connectionFactory, which is simply injected into this configuration. Where is that coming from? This is actually part of a Spring Boot based application, and there is a Spring Boot auto-configuration for the RabbitMQ connectionFactory based on whether the RabbitMQ related libraries are present in the classpath. Here is the complete configuration if you are interested in exploring further – https://github.com/bijukunjummen/rg-si-rabbit/blob/master/src/main/java/rube/config/RabbitConfig.java

References:

- Spring-AMQP project here
- Spring-Boot starter project using RabbitMQ here

Reference: Spring @Configuration – RabbitMQ connectivity from our JCG partner Biju Kunjummen at the all and sundry blog....
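For completeness: if you are not running on Spring Boot and its auto-configuration, the injected connectionFactory has to be declared by hand. Below is a minimal sketch of how that bean might look, assuming Spring AMQP's CachingConnectionFactory and the same connection properties as in the XML version above; it is an illustration, not part of the original configuration:

    @Bean
    public ConnectionFactory rabbitConnectionFactory(
            @Value("${rabbit.user}") String user,
            @Value("${rabbit.pass}") String pass) {
        // Rough equivalent of <rabbit:connection-factory .../> from the XML configuration;
        // CachingConnectionFactory comes from org.springframework.amqp.rabbit.connection.
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost", 5672);
        connectionFactory.setUsername(user);
        connectionFactory.setPassword(pass);
        return connectionFactory;
    }

With such a bean in place the rest of the RabbitConfig class works unchanged, since it only depends on the ConnectionFactory interface.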

NetBeans 8.0’s Five New Performance Hints

NetBeans 8.0 introduces several new Java hints. Although a large number of these new hints relate to the Java Persistence API, I focus here on five new hints in the Performance category. The five new "Performance Hints" introduced with NetBeans 8.0 are:

- Boxing of already boxed value
- Redundant String.toString()
- Replace StringBuffer/StringBuilder by String
- Unnecessary temporary during conversion from String
- Unnecessary temporary during conversion to String

Each of these five performance-related Java hints is illustrated in this post with screen snapshots taken from NetBeans 8.0 with code that demonstrates these hints. There are two screen snapshots for each hint: one showing the text displayed when the cursor hovers over the line of code marked with yellow underlining, and one showing the suggested course of action to be applied to address that hint (shown when clicking on the yellow light bulb to the left of the flagged line). Some of the captured screen snapshots include examples of code that avoid the hint.

Boxing of Already Boxed Value

Redundant String.toString()

Replace StringBuffer/StringBuilder by String

Unnecessary Temporary During Conversion from String

Unnecessary Temporary During Conversion to String

Unless I've done something incorrectly, there appears to be a minor bug with this last hint in that it reports "Unnecessary temporary when converting from String" when, in this case, it should really be "Unnecessary temporary when converting to String". This is not a big deal, as the condition is flagged and the action to fix it seems appropriate.

Conclusion

The five performance-related hints introduced by NetBeans 8.0 and illustrated here can help Java developers avoid unnecessary object instantiations and other unnecessary runtime costs. Although the benefit of this optimization as shown in my simple examples is almost negligible, it could lead to much greater savings when used in code with loops that perform these same unnecessary instantiations thousands of times. Even without consideration of the performance benefit, these hints help to remind Java developers, and teach developers new to Java, about the most appropriate mechanisms for acquiring instances and primitive values.

Reference: NetBeans 8.0's Five New Performance Hints from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
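To make the five hints above more concrete, here is a small, purely illustrative Java class (the class and variable names are made up, and it is not taken from the original post's screenshots) showing the kind of code each hint would typically flag, with the generally preferred form noted in the comments:

public class PerformanceHintExamples {

    public static void main(String[] args) {
        // Boxing of already boxed value: 'boxed' is already an Integer, so re-boxing it is redundant.
        Integer boxed = Integer.valueOf(42);
        Integer reBoxed = Integer.valueOf(boxed);        // hint: use 'boxed' directly

        // Redundant String.toString(): calling toString() on something that is already a String.
        String name = "NetBeans".toString();             // hint: just use "NetBeans"

        // Replace StringBuffer/StringBuilder by String: a builder used for a fixed concatenation.
        String greeting = new StringBuilder("Hello ").append("world").toString(); // hint: "Hello " + "world"

        // Unnecessary temporary during conversion from String: an Integer object created just to get an int.
        int port = new Integer("8080");                  // hint: Integer.parseInt("8080")

        // Unnecessary temporary during conversion to String: an Integer object created just to get a String.
        String portText = new Integer(8080).toString();  // hint: Integer.toString(8080)

        System.out.println(reBoxed + " " + name + " " + greeting + " " + port + " " + portText);
    }
}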

Groovy Goodness: Closure as a Class

When we write Groovy code there is a big chance we also write some closures. If we are working with collections, for example, and use the each, collect or find methods, we use closures as arguments for these methods. We can assign closures to variables and use the variable name to reference the closure. But we can also create a subclass of the Closure class to implement a closure. Then we use an instance of the new closure class wherever a closure can be used. To write a closure as a class we must subclass Closure and implement a method with the name doCall. The method can accept arbitrary arguments and the return type can be defined by us. So we are not overriding a method doCall from the superclass Closure. But Groovy will look for a method with the name doCall to execute the closure logic and internally use methods from the Closure superclass. In the following sample we write a very simple closure as a class to check if an object is a number. Then we use an instance of the class with the findAll method for a collection of objects: ...

Java 9 Behind the Scenes: Where Do New Features Come From?

Find out what's going on behind the scenes of Java, and how new features come to life. In a previous post we went over the new and pending features for the upcoming Java 9 release, and briefly mentioned the process that a new feature goes through before being added to the next release. Since this process affects almost all Java developers, but is less known to most of them, this post will focus on giving an insider's view of Java (and how you can suggest that new feature you always wanted). We felt the best way to understand how new features come to life would be to ask the people who are responsible for bringing them to life. We spoke with two Java Executive Committee members, Gil Tene and Werner Keil, together with Richard Warburton, a London Java Community member, and asked them about new features in Java and what kind of new capabilities they'd like to see in the future. This post covers the first part of the interview. But before we do that, here are the main players that take part in creating and voting on new features:

Groups – Individuals and organisations with a mutual interest around a broad subject or a specific body of code. Some examples are Security, Networking, Swing, and HotSpot.

Projects – Efforts to produce a body of code, documentation or other effort. Must be sponsored by at least one group. Recent examples are Project Lambda, Project Jigsaw, and Project Sumatra.

JDK Enhancement Proposal (JEP) – Allows promoting a new specification informally before or in parallel to the JCP, when further exploration is needed. Unlike JSRs, it may also contain features that have no specification-level visibility (e.g. a new garbage collector or JIT implementation). Accepted JEPs become part of the JDK roadmap and are assigned a version number.

Java Specification Request (JSR) – The actual specification of the feature happens at this stage; it can come through Groups/Projects, JEPs, or from individual JCP (Java Community Process) members. An umbrella JSR is usually opened for each Java version (also called a platform JSR); this has yet to happen with Java 9. Individual members of the community can also propose new Java Specification Requests.

How do new features find their way into Java?

Warburton: "The real answer is that someone wants the feature. That person could be an internal engineer or project manager at a big vendor or an outside member of the community. Either way it needs to be something that meets quite a strict set of criteria:

Serious User Demand: It needs to be something that is a consensus benefit for the whole community. Example: Java SE 8 adds lambdas – this is a feature that has been argued about and demanded for years.

Tried and Tested: Standards have to last a long time and it's a very difficult and expensive process to modify standards which have already been established. The consequence is that the JCP (Java Community Process) isn't bleeding edge. It's the place to go once technologies are ready for enterprise adoption.

Not Unique to One Vendor: Standards need to be comfortable to all vendors. Example: weak/soft/phantom references interact with the garbage collectors, so they were specified in a way that tried to minimise the restrictions that they impose on GC design."

"Once you've figured out that your feature is a good idea, you need to start the standardisation process. This involves raising a JSR – Java Specification Request – which is the atomic unit of changing Java. A JSR needs to be voted upon multiple times:
- Firstly, to approve that it's a good idea to start a JSR on this topic.
- Iteratively, whenever a public review comes up, to ensure that the JSR is headed on the right course.
- Finally, when it's time to approve the standard."

Tene: "Java has a long history of careful and deliberate enhancements over time. One of the things that still makes Java more successful than virtually all other programming languages and environments in history is its relative success in avoiding the rapid adoption of 'the latest cool thing', and its relative consistency as a platform. This is true across the platform as a whole (Java SE, EE, etc.) but is probably most clearly followed within the Java SE platform, where I focus most of my attention. Collections, NIO, Generics, platform-optimized concurrent utilities, MethodHandles, and most recently lambda expressions and streaming library support are all good examples of features that were added and then widely adopted over time, showing their true value to the platform and their importance as much more than a fleeting fashion."

"The JCP (Java Community Process) is responsible for capturing new features via JSRs. A successful individual, stand-alone JSR standardizes the semantics of a specific set of features or behaviors. But the ultimate success and adoption of a feature is usually demonstrated when it becomes a required part of a platform JSR, thereby becoming an integral part of the Java SE or Java EE platform. Since the creation of OpenJDK, we've seen much of the early stage work on features in Java SE move from being developed within JSRs to being developed within JEPs (JDK Enhancement Proposals). They all still end up being spec'd and completed as before, and become part of the platform JSRs as well, but we see a lot more development in the open, and a lot more experimentation (development of things that wouldn't necessarily make it as JSRs)."

Keil: "Three competing JSON libraries – one for Java EE, another Oracle-proprietary one bundled with Java ME 8, and yet another independent JEP-based approach for Java SE 9 – is probably one of the best examples of where this can go wrong, contrary to users' and developers' needs and the aim of setting ONE standard for Java. Another would be the overlapping and largely incompatible Date/Time APIs introduced with Java SE 8 (JavaFX + JSR 310), while two other libraries already existed under java.util. Java architects provide input and recommend things, but looking at e.g. the Date/Time API, only the worst issues they or others (including a few Executive Committee members) pointed out were addressed, while other concerns they had were brushed away."

Can you share one personal experience you had with the Java Community Process?

Keil: "A while ago myself and Co-Spec Lead Antoine Sabot-Durand proposed a JSR for standardized CDI-based connectors to social media and similar APIs, naturally based around JSON, REST, or security standards like OAuth, too. The JSR was turned down by a slight majority of 8:5. Given that Seam Social and the entire Seam ecosystem at Red Hat were dropped in favor of new projects, just as the entire JBoss server got a new name and brand (WildFly) around that time, the resulting open source project Agorava was a natural fit to replace Seam Social and many ideas we had proposed for JSR 357."

Tene: "As part of the JCP Executive Committee, I've had to weigh in on approving new JSRs.
In more than one case I've voted to reject JSRs (and advocated for others to do the same) that I thought did not belong in the platform, but most JSRs that naturally fit into the Java ecosystem do not have too high a bar to cross, as long as the JSR lead signs up for the detailed work and process involved."

Warburton: "I helped out a bit with the date and time library. I think it gave me a greater appreciation for the level of detail in which each unit of functionality or method signature needs to be thrashed out. People invest a lot of time in trying their best to get these APIs right."

Reference: Java 9 Behind the Scenes: Where Do New Features Come From? from our JCG partner Alex Zhitnitsky at the Takipi blog....

WAI-ARIA support for AutoComplete widget

In this post I would like to discuss accessibility for an AutoComplete widget. A typical AutoComplete widget provides suggestions while you type into the field. In my current work I implemented a JSF component on the basis of Twitter's Typeahead – a flexible JavaScript library that provides a strong foundation for building robust typeaheads. The Typeahead widget has a solid specification in the form of pseudocode that details how the UI reacts to events. The Typeahead can show a hint in the corresponding input field, like Google's search field does, highlight matches, and deal with custom datasets and precompiled templates. Furthermore, the Bloodhound suggestion engine offers prefetching, intelligent caching, fast lookups, and backfilling with remote data.

Despite its many features, one big shortcoming of the Typeahead is its insufficient WAI-ARIA support (I would say it was completely missing until now). An AutoComplete widget should be designed to be accessible out of the box to users of screen readers and other assistive tools. I decided to add full WAI-ARIA support, completed this task and sent my pull request to GitHub. Below is the new "WAI-ARIA aware" markup with an explanation (HTML attributes that are not relevant are omitted).

<input class="typeahead tt-hint" aria-hidden="true">

<input class="typeahead tt-input" role="combobox"
       aria-autocomplete="list/both"
       aria-owns="someUniqueID"
       aria-activedescendant="set dynamically to someUniqueID-1, etc."
       aria-expanded="false/true">

<span id="someUniqueID" class="tt-dropdown-menu" role="listbox">
    <div class="tt-dataset-somename" role="presentation">
        ...
        <span class="tt-suggestions" role="presentation">
            <div id="someUniqueID-1" class="tt-suggestion" role="option">
                ... single suggestion ...
            </div>
            ...
        </span>
        ...
    </div>
</span>

<span class="tt-status" role="status" aria-live="polite"
      style="border:0 none; clip:rect(0, 0, 0, 0); height:1px; width:1px;
             margin:-1px; overflow:hidden; padding:0; position:absolute;">
    ... HTML string or a precompiled template ...
</span>

The first input field with the class tt-hint simulates a visual hint (see the picture above). The hint visually completes the input query to the matched suggestion. The query can be completed to the suggestion (hint) by pressing either the right arrow or the tab key. The hint is not relevant for screen readers, hence we can apply aria-hidden="true" to that field. The hint is then ignored by screen readers. Why is it not important? Because we will force reading of the matched suggestion in a more intelligent way via the "status" area with aria-live="polite" (explained below).

The next input field is the main element where the user types the query. It should have the role "combobox". This is a recommended role for an AutoComplete. See the official WAI-ARIA documentation for more details. In fact, the documentation also shows a rough markup structure of an AutoComplete! The main input field should have various ARIA states and properties. aria-autocomplete="list" indicates that the input provides autocomplete suggestions in the form of a list as the user types. aria-autocomplete="both" indicates that suggestions are also provided by a hint (in addition to a list). The property aria-owns indicates that there is a parent / child relationship between the input field and the list with suggestions. This property should always be set when the DOM hierarchy cannot be used to represent the relationship; otherwise, screen readers will have a problem finding the list with suggestions. In our case, it points to the ID of the list.
The most interesting property is aria-activedescendant. A sightless user navigates through the list via the arrow keys. The property aria-activedescendant propagates changes in focus to assistive technology – it is adjusted to reflect the ID attribute of the current child element that has been navigated to. In the picture above, the item "Lawrence of Arabia" is selected (highlighted). aria-activedescendant is set to the ID of this item, and screen readers read "Lawrence of Arabia" to blind users. Note: the focus stays on the input field, so that you can still edit the input value. I suggest reading more about this property in Google's Introduction to Web Accessibility.

The property aria-expanded indicates whether the list with suggestions is expanded (true) or collapsed (false). This property is updated automatically when the list's state changes.

The list with suggestions itself should have the role "listbox". That means the widget allows the user to select one or more items from a list of choices. role="option" should be applied to the individual result item nodes within the list. There is an interesting article, "Use 'listbox' and 'option' roles when constructing AutoComplete lists", which I suggest reading. Parts that are not important for screen readers should be marked with role="presentation". This role says "my markup is only for sighted users". You probably ask, what about role="application"? Is it important for us? Not really. I skipped it after reading "Not All ARIA Widgets Deserve role='application'".

The last element in the markup is a span with role="status" and the property aria-live="polite". What is it good for? You can spice up your widget by letting the user know that autocomplete results are available, via a text that gets automatically spoken. The text to be spoken should be added by the widget to an element that is moved outside the viewport. This is the mentioned span element with the applied styles. The styles are exactly the same as the jQuery CSS class ui-helper-hidden-accessible, which hides content visually but leaves it available to assistive technologies. The property aria-live="polite" on the span element means that updates within this element should be announced at the next graceful interval, such as when the user stops typing. Generally, the aria-live property indicates a section within the content that is live and the verbosity with which changes shall be announced. I defined the spoken text for the AutoComplete in my project as a JavaScript template compiled by Handlebars (any other templating engine such as Hogan can be used too):

Handlebars.compile(
    '{{#unless isEmpty}}{{count}} suggestions available.' +
    '{{#if withHint}}Top suggestion {{hint}} can be chosen by right arrow or tab key.' +
    '{{/if}}{{/unless}}')

When the user stops typing and suggestions are shown, a screen reader reads the count of available suggestions and the top suggestion. Really nice.

Last but not least is the testing. If you do not already have a screen reader installed, install the Google Chrome extensions ChromeVox and Accessibility Developer Tools. These are good tools for development. Please watch a short ChromeVox demo and a demo for Accessibility Developer Tools too. Alternatively, you can also try the free standalone screen reader NVDA. Simply give the tools a try.

Reference: WAI-ARIA support for AutoComplete widget from our JCG partner Oleg Varaksin at the Thoughts on software development blog....