
Injecting domain objects instead of infrastructure components

Dependency Injection is a widely used software design pattern in Java (and many other programming languages) that is used to achieve Inversion of Control. It promotes reusability, testability, and maintainability, and helps build loosely coupled components. Dependency Injection is the de facto standard for wiring Java objects together these days. Various Java frameworks like Spring or Guice can help implement Dependency Injection. Since Java EE 6 there is also an official Java EE API for Dependency Injection available: Contexts and Dependency Injection (CDI). We use Dependency Injection to inject services, repositories, domain-related components, resources or configuration values. However, in my experience, it is often overlooked that Dependency Injection can also be used to inject domain objects. A typical example of this is the way the currently logged-in user is obtained in many Java applications. Usually we end up asking some component or service for the logged-in user. The code for this might look something like the following snippet:

```java
public class SomeComponent {

  @Inject
  private AuthService authService;

  public void workWithUser() {
    User loggedInUser = authService.getLoggedInUser();
    // do something with loggedInUser
  }
}
```

Here an AuthService instance is injected into SomeComponent. Methods of SomeComponent then use the AuthService object to obtain an instance of the logged-in user. However, instead of injecting AuthService we could inject the logged-in user directly into SomeComponent. This could look like this:

```java
public class SomeComponent {

  @Inject
  @LoggedInUser
  private User loggedInUser;

  public void workWithUser() {
    // do something with loggedInUser
  }
}
```

Here the User object is directly injected into SomeComponent and no instance of AuthService is required. The custom annotation @LoggedInUser is used to avoid conflicts if more than one (managed) bean of type User exists.
Both Spring and CDI are capable of this type of injection (and the configuration is actually very similar). In the following section we will see how domain objects can be injected using Spring. After this, I will describe what changes are necessary to do the same with CDI.

Domain object injection with Spring

To inject domain objects as shown in the example above, we only have to do two little steps. First we have to create the @LoggedInUser annotation:

```java
import java.lang.annotation.*;
import org.springframework.beans.factory.annotation.Qualifier;

@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface LoggedInUser {}
```

Please note the @Qualifier annotation, which turns @LoggedInUser into a custom qualifier. Qualifiers are used by Spring to avoid conflicts if multiple beans of the same type are available. Next we have to add a bean definition to our Spring configuration. We use Spring’s Java configuration here; the same can be done with XML configuration.

```java
@Configuration
public class Application {

  @Bean
  @LoggedInUser
  @Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
  public User getLoggedInUser() {
    // retrieve and return user object from server/database/session
  }
}
```

Inside getLoggedInUser() we have to retrieve and return an instance of the currently logged-in user (e.g. by asking the AuthService from the first snippet). With @Scope we can control the scope of the returned object. The best scope depends on the domain object and might differ among different domain objects. For a User object representing the logged-in user, request or session scope would be valid choices. By annotating getLoggedInUser() with @LoggedInUser, we tell Spring to use this bean definition whenever a bean of type User annotated with @LoggedInUser should be injected.
Now we can inject the logged-in user into other components:

```java
@Component
public class SomeComponent {

  @Autowired
  @LoggedInUser
  private User loggedInUser;

  ...
}
```

In this simple example the qualifier annotation is actually not necessary. As long as there is only one bean definition of type User available, Spring could inject the logged-in user by type. However, when injecting domain objects it can easily happen that you have multiple bean definitions of the same type. So, using an additional qualifier annotation is a good idea. With their descriptive names, qualifiers can also act as documentation (if named properly).

Simplify Spring bean definitions

When injecting many domain objects, there is a chance that you end up repeating the scope and proxy configuration over and over again in your bean configuration. In such a situation it comes in handy that Spring annotations can be used on custom annotations. So, we can simply create our own @SessionScopedBean annotation that can be used instead of @Bean and @Scope:

```java
@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Bean
@Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
public @interface SessionScopedBean {}
```

Now we can simplify the bean definition to this:

```java
@Configuration
public class Application {

  @LoggedInUser
  @SessionScopedBean
  public User getLoggedInUser() {
    ...
  }
}
```

Java EE and CDI

The configuration with CDI is nearly the same. The only difference is that we have to replace Spring annotations with javax.inject and CDI annotations. So, @LoggedInUser should be annotated with javax.inject.Qualifier instead of org.springframework.beans.factory.annotation.Qualifier (see: Using Qualifiers). The Spring bean definition can be replaced with a CDI producer method. Instead of @Scope the appropriate CDI scope annotation can be used. At the injection point Spring’s @Autowired can be replaced with @Inject.
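As a rough sketch of what such a CDI producer method could look like (the AuthService and User types are carried over from the earlier snippets; session scope is just one possible choice here, and the producer class name is made up for illustration):

```java
import javax.enterprise.context.SessionScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;

public class LoggedInUserProducer {

  @Inject
  private AuthService authService; // the service from the first snippet

  @Produces
  @LoggedInUser
  @SessionScoped
  public User produceLoggedInUser() {
    // asked once per session, then cached by the container
    return authService.getLoggedInUser();
  }
}
```

Note that a normal-scoped producer like this requires the produced type to be proxyable by the container (e.g. a non-final class with a non-private no-arg constructor).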
Note that Spring also supports javax.inject annotations. If you add the javax.inject dependency to your Spring project, you can also use @Inject and @javax.inject.Qualifier. It is actually a good idea to do this because it reduces Spring dependencies in your Java code.

Conclusion

We can use custom annotations and scoped beans to inject domain objects into other components. Injecting domain objects can make your code easier to read and can lead to cleaner dependencies. If you only inject AuthService to obtain the logged-in user, you actually depend on the logged-in user and not on AuthService. On the downside, it couples your code more strongly to the Dependency Injection framework, which has to manage bean scopes for you. If you want to keep the ability to use your classes outside a Dependency Injection container, this can be a problem. Which types of domain objects are suitable for injection highly depends on the application you are working on. Good candidates are domain objects you use often and which do not depend on any method or request parameters. The currently logged-in user is an object that might often be suitable for injection. You can find the source of the shown example on GitHub.

Reference: Injecting domain objects instead of infrastructure components from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog.

Spring @Configuration and injecting bean dependencies as method parameters

One of the ways Spring recommends injecting inter-dependencies between beans is shown in the following sample copied from the Spring reference guide here:

```java
@Configuration
public class AppConfig {

  @Bean
  public Foo foo() {
    return new Foo(bar());
  }

  @Bean
  public Bar bar() {
    return new Bar("bar1");
  }
}
```

So here, bean `foo` is being injected with a `bar` dependency. However, there is an alternate way to inject the dependency that is not documented well: just take the dependency as a `@Bean` method parameter, this way:

```java
@Configuration
public class AppConfig {

  @Bean
  public Foo foo(Bar bar) {
    return new Foo(bar);
  }

  @Bean
  public Bar bar() {
    return new Bar("bar1");
  }
}
```

There is a catch here though: the injection is now by type. The `bar` dependency would be resolved by type first and, if duplicates are found, then by name:

```java
@Configuration
public static class AppConfig {

  @Bean
  public Foo foo(Bar bar1) {
    return new Foo(bar1);
  }

  @Bean
  public Bar bar1() {
    return new Bar("bar1");
  }

  @Bean
  public Bar bar2() {
    return new Bar("bar2");
  }
}
```

In the above sample the dependency `bar1` will be correctly injected. If you want to be more explicit about it, a @Qualifier annotation can be added in:

```java
@Configuration
public class AppConfig {

  @Bean
  public Foo foo(@Qualifier("bar1") Bar bar1) {
    return new Foo(bar1);
  }

  @Bean
  public Bar bar1() {
    return new Bar("bar1");
  }

  @Bean
  public Bar bar2() {
    return new Bar("bar2");
  }
}
```

So now to the question of whether this is recommended at all: I would say yes, for certain cases.
For example, had the bar bean been defined in a different @Configuration class, the way to inject the dependency would then be along these lines:

```java
@Configuration
public class AppConfig {

  @Autowired
  @Qualifier("bar1")
  private Bar bar1;

  @Bean
  public Foo foo() {
    return new Foo(bar1);
  }
}
```

I find the method parameter approach simpler here:

```java
@Configuration
public class AppConfig {

  @Bean
  public Foo foo(@Qualifier("bar1") Bar bar1) {
    return new Foo(bar1);
  }
}
```

Thoughts?

Reference: Spring @Configuration and injecting bean dependencies as method parameters from our JCG partner Biju Kunjummen at the all and sundry blog.

10 Hosted Continuous Integration Services for a Private Repository

Every project I’m working with starts with the setup of a continuous integration pipeline. I’m a big fan of cloud services, which is why I was always using travis-ci.org. A few of my clients questioned this choice recently, mostly because of the price. So I decided to make a brief analysis of the market. I configured rultor, an open source project, in every CI service I managed to find. All of them are free for open source projects. All of them are hosted and do not require any server installation. Here they are, in order of my personal preference:

| Service          | Price   | Linux | Windows | MacOS |
|------------------|---------|-------|---------|-------|
| travis-ci.com    | $129/mo | YES   | NO      | YES   |
| snap-ci.com      | $80/mo  | YES   | NO      | NO    |
| semaphoreapp.com | $29/mo  | YES   | NO      | NO    |
| appveyor.com     | $39/mo  | NO    | YES     | NO    |
| shippable.com    | $1/mo   | YES   | NO      | NO    |
| wercker.com      | free!   | YES   | NO      | NO    |
| codeship.io      | $49/mo  | YES   | NO      | NO    |
| magnum-ci.com    | ?       | YES   | NO      | NO    |
| drone.io         | $25/mo  | YES   | NO      | NO    |
| circleci.io      | $19/mo  | YES   | NO      | NO    |
| solanolabs.com   | $15/mo  | YES   | NO      | NO    |
| hosted-ci.com    | $49/mo  | NO    | NO      | YES   |
| ship.io          | free!   | YES   | NO      | YES   |

If you know any other good continuous integration services, email me and I’ll review and add them to this list.

travis-ci.org is the best platform I’ve seen so far, mostly because it is the most popular. It integrates perfectly with GitHub and has proper documentation. One important downside is the price of $129 per month. “With this money you can get a dedicated EC2 instance and install Jenkins there” — some of my clients say. I strongly disagree, since Jenkins will require 24×7 administration, which costs way more than $129, but it’s always difficult to explain.

snap-ci.com is a product of ThoughtWorks, the author of Go, an open source continuous integration server. It looks a bit more complicated than the others, giving you the ability to define “stages” and combine them into pipelines. I’m not sure yet how these mechanisms may help in the small and medium size projects we’re mostly working with, but they look “cool”.

semaphoreapp.com is easy to configure and work with. It makes the impression of a light-weight system, which I generally appreciate.
As a downside, they don’t have Maven pre-installed, but this was solved easily with a short custom script that downloads and unpacks Maven. Another downside is that they are not configurable through a file (like .travis.yml) — you have to do everything through the UI.

appveyor.com is the only one that runs Windows builds. Even though I’m working mostly with Java and Ruby, which are expected to be platform independent, they very often appear to be exactly the opposite. When your build succeeds on Linux, there is almost no guarantee it will pass on Windows or Mac. I’m planning to use appveyor in every project, in combination with some other CI service. I’m still testing it though…

shippable.com was easy to configure, since it understands .travis.yml out of the box. Besides that, nothing fancy.

wercker.com is a European product from Amsterdam, which is still in beta and is therefore free for all projects. The platform looks very promising. It is still free for private repositories and is backed by investments. I’m still testing it…

codeship.io works fine, but their web UI looks a bit outdated. Anyway, I’m using them now; we will see.

magnum-ci.com is a very lightweight and young system. It doesn’t connect automatically to GitHub, so you have to do some manual operations to add a web hook. Besides that, it works just fine.

drone.io works fine, but their support didn’t reply to me when I asked for a Maven version update. Besides that, their badge is not updated correctly in the GitHub README.md.

circleci.io — I still don’t know why my build fails there. It is really difficult to configure and to understand what’s going on. Trying to figure it out…

solanolabs.com — testing now…

hosted-ci.com — testing now…

ship.io — testing now…

Keep in mind that no matter how good and expensive your continuous integration service is, your quality won’t grow unless you make your master branch read-only.
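For context, the kind of configuration file mentioned above is tiny. A minimal .travis.yml for a Maven-based Java project might look like the sketch below (the JDK label and Maven goals are assumptions for illustration, not taken from rultor’s actual configuration):

```yaml
language: java
jdk:
  - oraclejdk7
install: mvn install -DskipTests=true
script: mvn test
```

Services that understand this format (like shippable.com, per the note above) can pick up such a file without any UI configuration.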
Related Posts

You may also find these posts interesting:

- Deploying to Heroku, in One Click
- Deployment Script vs. Rultor
- How to Publish to Rubygems, in One Click
- How to Deploy to CloudBees, in One Click
- How to Release to Maven Central, in One Click

Reference: 10 Hosted Continuous Integration Services for a Private Repository from our JCG partner Yegor Bugayenko at the About Programming blog.

Exploding Job Offers and Multiple Offer Synchronization

A recent post by Y Combinator’s Sam Altman, Exploding Offers Suck, detailed his distaste for accelerators and venture capitalists who pressure entrepreneurs into critical decisions on investment offers before they have a chance to shop. The article outlines Y Combinator’s policy of allowing offer acceptance up until the beginning of the program. An exploding offer is any offer that lists a date for the offer to expire, with the allotted time being minimal. Altman’s article is about venture funding, but most in the industry gain exposure to this situation via job offers. This practice is fairly standard for college internships, where acceptance is required months before the start date. Exploding offers may be less common for experienced professionals, but they are hardly rare. Many companies use templates for job offers where deadlines are arbitrary or listed only to encourage quick responses, which gives the false appearance of an exploding offer. Other firms have strict policies on enforcement, although strong candidates in a seller’s market will cause exceptions.

Why do exploding offers exist?

The employer’s justification for exploding job offers may focus on planning, multiple candidates, and finite needs. If a company has three vacancies and three candidates, how likely is it that all three receive offers? What is the likelihood they all accept? Companies develop a pipeline of perhaps twenty candidates for those three jobs. If six are found qualified, the company has a dilemma. The numbers and odds become ominous for firms evaluating thousands of college students for 100 internships. The exploding offer is one method for companies to mitigate the risk of accepted offers outnumbering vacancies. They are also used to ensure that the second or third best candidate will still be available while the hirer awaits a response from the first choice.
Fast acceptance of exploding offers may be viewed as a measure of a candidate’s interest in the position and company, particularly at smaller and riskier firms. Job seekers may feel that exploding offers serve to limit their employment options, with a potential side effect being lower salaries due to reduced competition. These offers may also help level the playing field for non-elite companies, as risk-averse candidates may subscribe to the bird-in-the-hand theory. Since they are not uncommon, it’s important to consider a strategy for handling exploding offers and multiple offer scenarios, and for preventing these problems altogether.

Avoid the situation entirely

The issue with exploding and multiple offers is the time constraint, and job seekers need to be proactive about these scenarios. The best strategy is to maximize the possibility that all offers arrive at the same time; if all offers are received simultaneously, there is no problem. Those applying to several firms should anticipate the possibility that offers may arrive over days or weeks. When researching companies of interest, investigate their standard interview process, the number of steps involved, and how long it takes. Current or former employees and recruiters representing the company will have answers, and when in doubt the general rule is that larger companies tend to move slower. Initiate contact with the slower companies first, and apply to the fastest hirers once interviews start with the first group.

Strategies to control the timing of multiple offers

Unfortunately, job searches are unpredictable, and candidates feel they have little influence on the speed or duration of a hiring process. Stellar candidates have much more control than they might expect, but even average applicants can affect timelines. If Company A has scheduled a third face-to-face meeting while Company B is just getting around to setting up a phone screen with HR, the candidate needs to slow the process with A while expediting B.
What tactics can hasten or extend hiring processes in order to synchronize offers?

Speeding up

- Ask – Ask about the anticipated duration of the interview process and about any ways it can be expedited. This is a relatively benign request as long as it is made respectfully and tactfully.
- Flexibility and availability – Provide prompt interview availability details regularly, even if they aren’t requested, and (if possible) offer flexibility to meet off-hours or on weekends.
- Pressure – As somewhat of a last resort, some candidates may choose to disclose the existence of other offers and the need for a decision by the employer. This can backfire and should be approached delicately.

Slowing down

- Delay interviews – This is the easiest and most effective method to employ, with the risk being that the position may be offered to someone else in the interim. When multiple rounds are likely, adding a couple of days between rounds can extend the process significantly.
- Ask questions – There are many details about a company that influence decisions to accept or reject offers, and the process of gathering that information takes time. At the offer stage, questions about benefits or policies can usually buy a day or two.
- Negotiate – Negotiating an offer will require the company to get approvals and to incorporate new terms into a letter.
- Request additional interviews or meetings – Once an offer is made, candidates feel pressure to accept or reject. Another option is to request additional dialogue and meetings to address any concerns and finalize decisions.

Specifics for exploding offers

The issue with exploding offers is typically the need to respond before other interviews are completed, so the goal is to buy time. Some candidates choose to accept the exploding offer as a backup in case a better offer isn’t made. This tactic isn’t optimal for either party, as the company may be left without a replacement and the candidate has burned a bridge.
In an exploding offer situation, first discover whether the offer is truly exploding. As was mentioned earlier, many companies want a timely answer but don’t need one. The offer letter may give the appearance of being an exploding offer without actual intent. One response to test the waters is: “The offer letter says I have x days to decide. Is that deadline firm, or could it be extended a day or two if I am not prepared to make a decision at that point?”. The company’s answer will be telling. If it is discovered that the offer is truly exploding, resorting to the tactics listed above could help. HR reps may be uncomfortable asking for a decision if they feel a candidate’s legitimate questions are unanswered. As the deadline approaches, negotiating terms and asking for more detail will provide time. A request for another meeting will require scheduling, and the parties involved might not be available until after the deadline. As a last resort, simply asking for an extension is always an option.

Reference: Exploding Job Offers and Multiple Offer Synchronization from our JCG partner Dave Fecak at the Job Tips For Geeks blog.

Factory Without IF-ELSE

Object oriented languages have the very powerful feature of polymorphism, which can be used to remove if/else or switch-case constructs from code. Code without conditions is easy to read. There are some places where you still have to put them, though, and one such example is a Factory/ServiceProvider class. I am sure you have seen a factory class with an IF-ELSE chain that keeps on getting bigger. In this blog I will share some techniques that can be used to remove the conditions from a factory class. I will use the code snippet below as an example:

```java
public static Validator newInstance(String validatorType) {
  if ("INT".equals(validatorType))
    return new IntValidator();
  else if ("DATE".equals(validatorType))
    return new DateValidator();
  else if ("LOOKUPVALUE".equals(validatorType))
    return new LookupValueValidator();
  else if ("STRINGPATTERN".equals(validatorType))
    return new StringPatternValidator();
  return null;
}
```

Reflection

This is the first thing that comes to mind when you want to remove conditions. You get the feeling of being a framework developer!

```java
public static Validator newInstance(String validatorClass) throws Exception {
  return (Validator) Class.forName(validatorClass).newInstance();
}
```

This looks very simple; the only problem is that the caller has to remember the fully qualified class name, and sometimes that can be an issue.

Map

A Map can be used to map an actual class instance to some user-friendly name:

```java
Map<String, Validator> validators = new HashMap<String, Validator>() {
  {
    put("INT", new IntValidator());
    put("LOOKUPVALUE", new LookupValueValidator());
    put("DATE", new DateValidator());
    put("STRINGPATTERN", new StringPatternValidator());
  }
};

public Validator newInstance(String validatorType) {
  return validators.get(validatorType);
}
```

This also looks neat, without the overhead of reflection.
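One caveat of the Map approach is that every caller receives the same shared Validator instance. With Java 8 this can be avoided by mapping names to Supplier references instead, so each lookup creates a fresh object. Below is a sketch of that idea; the IntValidator and DateValidator implementations are simplified stand-ins for the article’s validators:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

interface Validator {
    boolean isValid(String value);
}

// Simplified stand-ins for the article's validator classes.
class IntValidator implements Validator {
    public boolean isValid(String value) {
        try {
            Integer.parseInt(value);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}

class DateValidator implements Validator {
    public boolean isValid(String value) {
        return value.matches("\\d{4}-\\d{2}-\\d{2}"); // e.g. 2014-10-09
    }
}

class ValidatorFactory {
    private static final Map<String, Supplier<Validator>> VALIDATORS = new HashMap<>();
    static {
        VALIDATORS.put("INT", IntValidator::new);
        VALIDATORS.put("DATE", DateValidator::new);
    }

    // Looks up a supplier by name and creates a fresh instance per call.
    public static Validator newInstance(String validatorType) {
        Supplier<Validator> supplier = VALIDATORS.get(validatorType);
        if (supplier == null) {
            throw new IllegalArgumentException("Unknown validator type: " + validatorType);
        }
        return supplier.get();
    }
}
```

As a side benefit, an unknown key fails fast with a descriptive exception here, instead of silently returning null as in the plain Map version.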
Enum

This is an interesting one:

```java
enum ValidatorType {
  INT {
    public Validator create() {
      return new IntValidator();
    }
  },
  LOOKUPVALUE {
    public Validator create() {
      return new LookupValueValidator();
    }
  },
  DATE {
    public Validator create() {
      return new DateValidator();
    }
  };

  public Validator create() {
    return null;
  }
}

public Validator newInstance(ValidatorType validatorType) {
  return validatorType.create();
}
```

This approach uses constant-specific enum methods to remove the condition. One issue is that you need an enum constant for each type, and you don’t want to create tons of them! I personally like this method.

Conclusion

If-else or switch-case makes code difficult to understand, and we should try to avoid them as much as possible. Language constructs should be used to avoid at least some of these switch cases. We should try to code without IF-ELSE, and that will force us to come up with better solutions.

Reference: Factory Without IF-ELSE from our JCG partner Ashkrit Sharma at the Are you ready blog.

Ceylon 1.1.0 is now available

Ten whole months in the making, this is the biggest release of Ceylon so far! Ceylon 1.1.0 incorporates oodles of enhancements and bugfixes, with well over 1400 issues closed. Ceylon is a modern, modular, statically typed programming language for the Java and JavaScript virtual machines. The language features a flexible and very readable syntax, a unique and uncommonly elegant static type system, a powerful module architecture, and excellent tooling, including an awesome Eclipse-based IDE. Ceylon enables the development of cross-platform modules that execute portably in both virtual machine environments. Alternatively, a Ceylon module may target one or the other platform, in which case it may interoperate with native code written for that platform.

For the end user, the most significant improvements in Ceylon 1.1 are:

- performance enhancements, especially to compilation times in the IDE,
- even smoother interoperation with Java overloading and Java generics,
- out of the box support for deployment of Ceylon modules on OSGi containers,
- enhancements to the Ceylon SDK, including the new platform modules ceylon.promise, ceylon.locale, and ceylon.logging, along with many improvements to ceylon.language, ceylon.collection, and ceylon.test,
- many new features and improvements in Ceylon IDE, including ceylon.formatter, a high-quality code formatter written in Ceylon,
- support for command line tool plugins, including the new ceylon format and ceylon build plugins, and
- integration with vert.x.

A longer list of changes may be found here.
In the box

This release includes:

- a complete language specification that defines the syntax and semantics of Ceylon in language accessible to the professional developer,
- a command line toolset including compilers for Java and JavaScript, a documentation compiler, and support for executing modular programs on the JVM and Node.js,
- a powerful module architecture for code organization, dependency management, and module isolation at runtime,
- the language module, our minimal, cross-platform foundation of the Ceylon SDK, and
- a full-featured Eclipse-based integrated development environment.

Language

Ceylon is a highly understandable object-oriented language with static typing. The language features:

- an emphasis upon readability and a strong bias toward omission or elimination of potentially-harmful or potentially-ambiguous constructs and toward highly disciplined use of static types,
- an extremely powerful and uncommonly elegant type system combining subtype and parametric polymorphism with:
  - first-class union and intersection types,
  - both declaration-site and use-site variance, and
  - the use of principal types for local type inference and flow-sensitive typing,
- a unique treatment of function and tuple types, enabling powerful abstractions, along with the most elegant approach to null of any modern language,
- first-class constructs for defining modules and dependencies between modules,
- a very flexible syntax including comprehensions and support for expressing tree-like structures, and
- fully-reified generic types, on both the JVM and JavaScript virtual machines, and a unique typesafe metamodel.

More information about these language features may be found in the feature list and quick introduction.
This release introduces the following new language features:

- support for use-site variance, enabling complete interop with Java generics,
- dynamic interfaces, providing a typesafe way to interoperate with dynamically typed native JavaScript code,
- type inference for parameters of anonymous functions that occur in an argument list, and
- a Byte class that is optimized by the compiler.

Language module

The language module was a major focus of attention in this release, with substantial performance improvements, API optimizations, and new features, including the addition of a raft of powerful operations for working with streams. The language module now includes an API for deploying Ceylon modules programmatically from Java. The language module is now considered stable, and no further breaking changes to its API are contemplated.

Command line tools

The ceylon command now supports a plugin architecture. For example, type:

ceylon plugin install ceylon.formatter/1.1.0

to install the ceylon format subcommand.

IDE

This release of the IDE features dramatic improvements to build performance, and introduces many new features, including:

- a code formatter,
- seven new refactorings and many improvements to existing refactorings,
- many new quick fixes/assists,
- IntelliJ-style “chain completion” and completion of toplevel functions applying to a value,
- a rewritten Explorer view, with better presentation of modules and modular dependencies,
- synchronization of all keyboard accelerators with JDT equivalents, and
- Quick Find References, Recently Edited Files, Format Block, Visualize Modular Dependencies, Open in Type Hierarchy View, Go to Refined Declaration, and much more.

SDK

The platform modules, recompiled for 1.1.0, are available in the shared community repository, Ceylon Herd.
This release introduces the following new platform modules:

- ceylon.promise, cross-platform support for promises,
- ceylon.locale, a cross-platform library for internationalization, and
- ceylon.logging, a simple logging API.

In addition, there were many improvements to ceylon.collection, which is now considered stable, and to ceylon.test. The Ceylon SDK is available from Ceylon Herd, the community module repository.

Vert.x integration

mod-lang-ceylon implements Ceylon 1.1 support for Vert.x 2.1.x, and may be downloaded here.

Community

The Ceylon community site, http://ceylon-lang.org, includes documentation and information about getting involved.

Source code

The source code for Ceylon, its specification, and its website is freely available from GitHub.

Issues

Bugs and suggestions may be reported in GitHub’s issue tracker.

Acknowledgement

We’re deeply indebted to the community volunteers who contributed a substantial part of the current Ceylon codebase, working in their own spare time. The following people have contributed to this release: Gavin King, Stéphane Épardaud, Tako Schotanus, Emmanuel Bernard, Tom Bentley, Aleš Justin, David Festal, Max Rydahl Andersen, Mladen Turk, James Cobb, Tomáš Hradec, Ross Tate, Ivo Kasiuk, Enrique Zamudio, Roland Tepp, Diego Coronel, Daniel Rochetti, Loic Rouchon, Matej Lazar, Lucas Werkmeister, Akber Choudhry, Corbin Uselton, Julien Viet, Stephane Gallès, Paco Soberón, Renato Athaydes, Michael Musgrove, Flavio Oliveri, Michael Brackx, Brent Douglas, Lukas Eder, Markus Rydh, Julien Ponge, Pete Muir, Henning Burdack, Nicolas Leroux, Brett Cannon, Geoffrey De Smet, Guillaume Lours, Gunnar Morling, Jeff Parsons, Jesse Sightler, Oleg Kulikov, Raimund Klein, Sergej Koščejev, Chris Marshall, Simon Thum, Maia Kozheva, Shelby, Aslak Knutsen, Fabien Meurisse, Sjur Bakka, Xavier Coulon, Ari Kast, Dan Allen, Deniz Türkoglu, F.
Meurisse, Jean-Charles Roger, Johannes Lehmann, Alexander Altman, allentc, Nikolay Tsankov, Chris Horne, gabriel-mirea, Georg Ragaller, Griffin DeJohn, Harald Wellmann, klinger, Luke, Oliver Gondža, Stephen Crawley.

Reference: Ceylon 1.1.0 is now available from our JCG partner Gavin King at the Ceylon Team blog.

WildFly subsystem for RHQ Metrics

For RHQ-Metrics I have started writing a subsystem for WildFly 8 that is able to collect metrics inside WildFly and then send them at regular intervals (currently every minute) to a RHQ-Metrics server. The next graph is a visualization with Grafana of the outcome when this sender was running for 1.5 days in a row. (It is interesting to see how the JVM fine-tunes its memory requirements over time, using less and less memory for this constant workload.)

The following is a visualization of the setup: the sender runs as a subsystem inside WildFly and reads metrics from the WildFly management API. The gathered metrics are then pushed via REST to RHQ-Metrics. Of course it is possible to send them to a RHQ-Metrics server that is running on a separate host. The configuration of the subsystem looks like this:

```xml
<subsystem xmlns="urn:org.rhq.metrics:wildflySender:1.0">
  <rhqm-server name="localhost"
               enabled="true"
               port="8080"
               token="0x-deaf-beef"/>
  <metric name="non-heap"
          path="/core-service=platform-mbean/type=memory"
          attribute="non-heap-memory-usage"/>
  <metric name="thread-count"
          path="/core-service=platform-mbean/type=threading"
          attribute="thread-count"/>
</subsystem>
```

As you see, the path to the DMR resource and the name of the attribute to be monitored as a metric can be given in the configuration. The implementation is still basic at the moment — you can find the source code in the RHQ-Metrics repository on GitHub. Contributions are very welcome. Heiko Braun and Harald Pehl are currently working on optimizing the scheduling with individual intervals and possible batching of requests for managed servers in a domain. Many thanks go to Emmanuel Hugonnet, Kabir Khan and especially Tom Cerar for their help in getting me going with writing a subsystem, which was pretty tricky for me.
The parsers, the object model and the XML had a big tendency to disagree with each other! Reference: WildFly subsystem for RHQ Metrics from our JCG partner Heiko Rupp at the Some things to remember blog....
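As an aside to the REST push described above: the subsystem sends gathered values as JSON over HTTP. The sketch below assembles one timestamped datapoint as a JSON string. The class name, method and exact payload shape are assumptions for illustration only, not taken from the subsystem source:

```java
public class MetricJson {

    // Hypothetical payload shape: one metric id carrying a single
    // timestamped numeric value, as a REST push to a metrics server might send it.
    public static String datapoint(String id, long timestampMillis, double value) {
        return "{\"id\":\"" + id + "\","
                + "\"data\":[{\"timestamp\":" + timestampMillis + ","
                + "\"value\":" + value + "}]}";
    }

    public static void main(String[] args) {
        System.out.println(datapoint("thread-count", 1000L, 42.0));
    }
}
```

In the real subsystem the values come from the WildFly management API and the POST goes to the configured rhqm-server; here the payload construction is shown in isolation.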

Beginner’s Guide to Hazelcast Part 2

This article continues the series that I have started featuring Hazelcast, a distributed, in-memory database. If one has not read the first post, please click here.

Distributed Collections

Hazelcast has a number of distributed collections that can be used to store data. Here is a list of them: IList, ISet and IQueue.

IList

IList is a collection that keeps the order of what is put in and can have duplicates. In fact, it implements the java.util.List interface. It is not thread safe and one must use some sort of mutex or lock to control access by many threads. I suggest Hazelcast’s ILock.

ISet

ISet is a collection that does not keep the order of the items placed in it. However, the elements are unique. This collection implements the java.util.Set interface. Like IList, this collection is not thread safe. I suggest using the ILock again.

IQueue

IQueue is a collection that keeps the order of what comes in and allows duplicates. It implements the java.util.concurrent.BlockingQueue interface, so it is thread safe. This is the most scalable of the collections because its capacity grows as the number of instances goes up. For instance, let’s say there is a limit of 10 items for a queue. Once the queue is full, no more can go in unless another Hazelcast instance comes up; then another 10 spaces are available, and a copy of the queue is also made. IQueues can also be persisted by implementing the QueueStore interface.

What They Have in Common

All three of them implement the ICollection interface. This means one can add an ItemListener to them, which lets one know when an item is added or removed. An example of this is in the Examples section.

Scalability

As scalability goes, ISet and IList don’t do that well in Hazelcast 3.x. This is because the implementation changed from being map based to becoming a collection in the MultiMap. This means they don’t partition and don’t go beyond a single machine.
Striping the collections can go a long way, or one can make one’s own collections based on the mighty IMap. Another way is to implement Hazelcast’s SPI.

Examples

Here is an example of an ISet, IList and IQueue. All three of them have an ItemListener. The ItemListener is added in the hazelcast.xml configuration file. One can also add an ItemListener programmatically for those inclined. A main class and the snippet of the configuration file that configures the collection will be shown.

CollectionItemListener

I implemented the ItemListener interface to show that all three of the collections can have an ItemListener. Here is the implementation:

package hazelcastcollections;

import com.hazelcast.core.ItemEvent;
import com.hazelcast.core.ItemListener;

/**
 * @author Daryl
 */
public class CollectionItemListener implements ItemListener {

    @Override
    public void itemAdded(ItemEvent ie) {
        System.out.println("ItemListener – itemAdded: " + ie.getItem());
    }

    @Override
    public void itemRemoved(ItemEvent ie) {
        System.out.println("ItemListener – itemRemoved: " + ie.getItem());
    }
}

ISet

Code

package hazelcastcollections.iset;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ISet;

/**
 * @author Daryl
 */
public class HazelcastISet {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();
        ISet<String> set = instance.getSet("set");
        set.add("Once");
        set.add("upon");
        set.add("a");
        set.add("time");

        ISet<String> set2 = instance2.getSet("set");
        for (String s : set2) {
            System.out.println(s);
        }

        System.exit(0);
    }
}

Configuration

<set name="set">
    <item-listeners>
        <item-listener include-value="true">hazelcastcollections.CollectionItemListener</item-listener>
    </item-listeners>
</set>

IList

Code

package hazelcastcollections.ilist;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IList;

/**
 * @author Daryl
 */
public class HazelcastIlist {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();
        IList<String> list = instance.getList("list");
        list.add("Once");
        list.add("upon");
        list.add("a");
        list.add("time");

        IList<String> list2 = instance2.getList("list");
        for (String s : list2) {
            System.out.println(s);
        }

        System.exit(0);
    }
}

Configuration

<list name="list">
    <item-listeners>
        <item-listener include-value="true">hazelcastcollections.CollectionItemListener</item-listener>
    </item-listeners>
</list>

IQueue

Code

I left this one for last because I have also implemented a QueueStore. There is no call on IQueue to add a QueueStore; one has to configure it in the hazelcast.xml file.

package hazelcastcollections.iqueue;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

/**
 * @author Daryl
 */
public class HazelcastIQueue {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();
        IQueue<String> queue = instance.getQueue("queue");
        queue.add("Once");
        queue.add("upon");
        queue.add("a");
        queue.add("time");

        IQueue<String> queue2 = instance2.getQueue("queue");
        for (String s : queue2) {
            System.out.println(s);
        }

        System.exit(0);
    }
}

QueueStore Code

package hazelcastcollections.iqueue;

import com.hazelcast.core.QueueStore;

import java.util.Collection;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

/**
 * @author Daryl
 */
public class QueueQStore implements QueueStore<String> {

    @Override
    public void store(Long l, String t) {
        System.out.println("storing " + t + " with " + l);
    }

    @Override
    public void storeAll(Map<Long, String> map) {
        System.out.println("store all");
    }

    @Override
    public void delete(Long l) {
        System.out.println("removing " + l);
    }

    @Override
    public void deleteAll(Collection<Long> clctn) {
        System.out.println("deleteAll");
    }

    @Override
    public String load(Long l) {
        System.out.println("loading " + l);
        return "";
    }

    @Override
    public Map<Long, String> loadAll(Collection<Long> clctn) {
        System.out.println("loadAll");
        Map<Long, String> retMap = new TreeMap<>();
        return retMap;
    }

    @Override
    public Set<Long> loadAllKeys() {
        System.out.println("loadAllKeys");
        return new TreeSet<>();
    }
}

Configuration

A few things need to be mentioned when it comes to configuring the QueueStore. There are three properties that do not get passed to the implementation. The binary property controls how Hazelcast sends the data to the store. Normally, Hazelcast stores the data serialized and deserializes it before it is sent to the QueueStore; if the property is true, the data is sent serialized. The default is false. The memory-limit property is how many entries are kept in memory before being put into the QueueStore; a memory-limit of 10000 means that the 10001st entry is sent to the QueueStore. At initialization of the IQueue, entries are loaded from the QueueStore; the bulk-load property is how many can be pulled from the QueueStore at a time.

<queue name="queue">
    <max-size>10</max-size>
    <item-listeners>
        <item-listener include-value="true">hazelcastcollections.CollectionItemListener</item-listener>
    </item-listeners>
    <queue-store>
        <class-name>hazelcastcollections.iqueue.QueueQStore</class-name>
        <properties>
            <property name="binary">false</property>
            <property name="memory-limit">10000</property>
            <property name="bulk-load">500</property>
        </properties>
    </queue-store>
</queue>

Conclusion

I hope one has learned about distributed collections inside Hazelcast. ISet, IList and IQueue were discussed.
The ISet and IList stay only on the instance on which they are created, while the IQueue has a copy made, can be persisted, and its capacity increases as the number of instances increases. The code can be seen here. References: The Book of Hazelcast: www.hazelcast.com; Hazelcast Documentation (comes with the Hazelcast download). Reference: Beginner’s Guide to Hazelcast Part 2 from our JCG partner Daryl Mathison at Daryl Mathison’s Java Blog....
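As a recap of the contracts discussed above: IList keeps insertion order and allows duplicates, ISet enforces uniqueness, and IQueue is a bounded BlockingQueue. Because these interfaces extend the standard JDK interfaces, the core semantics can be illustrated with plain, local JDK collections (not the distributed ones):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class CollectionSemantics {
    public static void main(String[] args) {
        // List contract (shared by IList): keeps insertion order, allows duplicates.
        List<String> list = new ArrayList<>(Arrays.asList("Once", "upon", "Once"));

        // Set contract (shared by ISet): duplicates are dropped.
        Set<String> set = new HashSet<>(list);

        // BlockingQueue contract (shared by IQueue): bounded capacity,
        // offer() returns false once the queue is full.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
        boolean first = queue.offer("a");
        boolean second = queue.offer("b");
        boolean third = queue.offer("c");

        System.out.println(list.size() + " " + set.size() + " "
                + first + " " + second + " " + third);
    }
}
```

The difference in Hazelcast is distribution: the same calls operate on data shared across the cluster, and for IQueue the capacity limit applies per instance, as described above.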

Using Asciidoctor with Spring: Rendering Asciidoc Documents with Spring MVC

Asciidoc is a text-based document format, and that is why it is very useful if we want to commit our documents into a version control system and track the changes between different versions. This makes Asciidoc a perfect tool for writing books, technical documents, FAQs, or user’s manuals. After we have created an Asciidoc document, the odds are that we want to publish it, and one way to do this is to publish that document on our website. Today we will learn how we can transform Asciidoc documents into HTML by using AsciidoctorJ and render the created HTML with Spring MVC. The requirements of our application are:It must support Asciidoc documents that are found from the classpath. It must support Asciidoc markup that is given as a String object. It must transform the Asciidoc documents into HTML and render the created HTML. It must “embed” the created HTML to the layout of our application.Let’s start by getting the required dependencies with Maven. Getting the Required Dependencies with Maven We can get the required dependencies with Maven by following these steps:Enable the Spring IO platform. Configure the required dependencies.First, we can enable the Spring IO platform by adding the following snippet to our POM file: <dependencyManagement> <dependencies> <dependency> <groupId>io.spring.platform</groupId> <artifactId>platform-bom</artifactId> <version>1.0.2.RELEASE</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> Second, we can configure the required dependencies by following these steps:Configure the logging dependencies in the pom.xml file. Add the spring-webmvc dependency to the pom.xml file. Add the Servlet API dependency to the POM file. Configure the Sitemesh (version 3.0.0) dependency in the POM file. Sitemesh ensures that every page of our application uses a consistent look and feel. Add the asciidoctorj dependency (version 1.5.0) to the pom.xml file.
AsciidoctorJ is a Java API for Asciidoctor and we use it to transform Asciidoc documents into HTML. The relevant part of our pom.xml file looks as follows: <dependencies> <!-- Logging --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> </dependency> <!-- Spring --> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-webmvc</artifactId> </dependency> <!-- Java EE --> <dependency> <groupId>javax.servlet</groupId> <artifactId>javax.servlet-api</artifactId> <scope>provided</scope> </dependency> <!-- Sitemesh --> <dependency> <groupId>org.sitemesh</groupId> <artifactId>sitemesh</artifactId> <version>3.0.0</version> </dependency> <!-- AsciidoctorJ --> <dependency> <groupId>org.asciidoctor</groupId> <artifactId>asciidoctorj</artifactId> <version>1.5.0</version> </dependency> </dependencies> Because we use the Spring IO Platform, we don’t have to specify the dependency versions of the artifacts that are part of the Spring IO Platform. Let’s move on and start implementing our application. Rendering Asciidoc Documents with Spring MVC We can fulfil the requirements of our application by following these steps:Configure our web application and the Sitemesh filter. Implement the view classes that are responsible for transforming Asciidoc documents into HTML and rendering the created HTML. Implement the controller methods that use the created view classes.Let’s get started. Configuring Sitemesh The first thing that we have to do is to configure Sitemesh. We can configure Sitemesh by following these three steps:Configure the Sitemesh filter in the web application configuration. Create the decorator that is used to create a consistent look and feel for our application.
Configure the decorator that is used by the Sitemesh filter.First, we have to configure the Sitemesh filter in our web application configuration. We can configure our web application by following these steps:Create a WebAppConfig class that implements the WebApplicationInitializer interface. Implement the onStartup() method of the WebApplicationInitializer interface by following these steps:Create an AnnotationConfigWebApplicationContext object and configure it to process our application context configuration class. Configure the dispatcher servlet. Configure the Sitemesh filter to process the HTML returned by the JSP pages of our application and all controller methods that use the URL pattern ‘/asciidoctor/*’ Add a new ContextLoaderListener object to the ServletContext. A ContextLoaderListener is responsible for starting and shutting down the Spring WebApplicationContext.The source code of the WebAppConfig class looks as follows (Sitemesh configuration is highlighted): import org.sitemesh.config.ConfigurableSiteMeshFilter; import org.springframework.web.WebApplicationInitializer; import org.springframework.web.context.ContextLoaderListener; import org.springframework.web.context.WebApplicationContext; import org.springframework.web.context.support.AnnotationConfigWebApplicationContext; import org.springframework.web.servlet.DispatcherServlet;import javax.servlet.DispatcherType; import javax.servlet.FilterRegistration; import javax.servlet.ServletContext; import javax.servlet.ServletException; import javax.servlet.ServletRegistration; import java.util.EnumSet;public class WebAppConfig implements WebApplicationInitializer {private static final String DISPATCHER_SERVLET_NAME = "dispatcher";private static final String SITEMESH3_FILTER_NAME = "sitemesh"; private static final String[] SITEMESH3_FILTER_URL_PATTERNS = {"*.jsp", "/asciidoctor/*"};@Override public void onStartup(ServletContext servletContext) throws ServletException { AnnotationConfigWebApplicationContext
rootContext = new AnnotationConfigWebApplicationContext(); rootContext.register(WebAppContext.class);configureDispatcherServlet(servletContext, rootContext); configureSitemesh3Filter(servletContext);servletContext.addListener(new ContextLoaderListener(rootContext)); }private void configureDispatcherServlet(ServletContext servletContext, WebApplicationContext rootContext) { ServletRegistration.Dynamic dispatcher = servletContext.addServlet( DISPATCHER_SERVLET_NAME, new DispatcherServlet(rootContext) ); dispatcher.setLoadOnStartup(1); dispatcher.addMapping("/"); }private void configureSitemesh3Filter(ServletContext servletContext) { FilterRegistration.Dynamic sitemesh = servletContext.addFilter(SITEMESH3_FILTER_NAME, new ConfigurableSiteMeshFilter() ); EnumSet<DispatcherType> dispatcherTypes = EnumSet.of(DispatcherType.REQUEST, DispatcherType.FORWARD ); sitemesh.addMappingForUrlPatterns(dispatcherTypes, true, SITEMESH3_FILTER_URL_PATTERNS); } }If you want to take a look at the application context configuration class of the example application, you can get it from GitHub.Second, we have to create the decorator that provides a consistent look and feel for our application. We can do this by following these steps:Create the decorator file in the src/main/webapp/WEB-INF directory. The decorator file of our example application is called layout.jsp. Add the HTML that provides the consistent look and feel to the created decorator file. Ensure that Sitemesh adds the title found from the returned HTML to the HTML that is rendered by the web browser. Configure Sitemesh to add the HTML elements found from the head of the returned HTML to the head of the rendered HTML.
Ensure that Sitemesh adds the body found from the returned HTML to the HTML that is shown to the user.The source code of our decorator file (layout.jsp) looks as follows (the parts that are related to Sitemesh are highlighted): <!doctype html> <%@ page contentType="text/html;charset=UTF-8" language="java" %> <html> <head> <title><sitemesh:write property="title"/></title> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet" type="text/css" href="${contextPath}/static/css/bootstrap.css"/> <link rel="stylesheet" type="text/css" href="${contextPath}/static/css/bootstrap-theme.css"/> <script type="text/javascript" src="${contextPath}/static/js/jquery-2.1.1.js"></script> <script type="text/javascript" src="${contextPath}/static/js/bootstrap.js"></script> <sitemesh:write property="head"/> </head> <body> <nav class="navbar navbar-inverse" role="navigation"> <div class="container-fluid"> <!-- Brand and toggle get grouped for better mobile display --> <div class="navbar-header"> <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> </div> <div class="collapse navbar-collapse"> <ul class="nav navbar-nav"> <li><a href="${contextPath}/">Document list</a></li> </ul> </div> </div> </nav> <div class="container-fluid"> <sitemesh:write property="body"/> </div> </body> </html> Third, we have to configure Sitemesh to use the decorator file that we created in the second step. We can do this by following these steps:Create a sitemesh3.xml file to the src/main/webapp/WEB-INF directory. 
Configure Sitemesh to use our decorator for all requests that are processed by the Sitemesh filter.The sitemesh3.xml file looks as follows: <sitemesh> <mapping path="/*" decorator="/WEB-INF/layout/layout.jsp"/> </sitemesh> That is it. We have now configured Sitemesh to provide a consistent look and feel for our application. Let’s move on and find out how we can implement the view classes that transform Asciidoc markup into HTML and render the created HTML. Implementing the View Classes Before we can start implementing the view classes that transform Asciidoc markup into HTML and render the created HTML, we have to take a quick look at our requirements. The requirements that are relevant for this step are:Our solution must support Asciidoc documents that are found from the classpath. Our solution must support Asciidoc markup that is given as a String object. Our solution must transform the Asciidoc documents into HTML and render the created HTML.These requirements suggest that we should create three view classes. These view classes are described in the following:We should create an abstract base class that contains the logic that transforms Asciidoc markup into HTML and renders the created HTML. We should create a view class that can read the Asciidoc markup from a file that is found from the classpath. We should create a view class that can read the Asciidoc markup from a String object.In other words, we have to create the following class structure:First, we have to implement the AbstractAsciidoctorHtmlView class. This class is an abstract base class that transforms Asciidoc markup into HTML and renders the created HTML. We can implement this class by following these steps:Create the AbstractAsciidoctorHtmlView class and extend the AbstractView class. Add a constructor to the created class and set the content type of the view to ‘text/html’. Add a protected abstract method getAsciidocMarkupReader() to the created class and set its return type to Reader.
The subclasses of this abstract class must implement this method, and the implementation of this method must return a Reader object that can be used to read the rendered Asciidoc markup. Add a private getAsciidoctorOptions() method to the created class and implement it by returning the configuration options of Asciidoctor. Override the renderMergedOutputModel() method of the AbstractView class, and implement it by transforming the Asciidoc document into HTML and rendering the created HTML.The source code of the AbstractAsciidoctorHtmlView class looks as follows: import org.asciidoctor.Asciidoctor; import org.asciidoctor.Options; import org.springframework.http.MediaType; import org.springframework.web.servlet.view.AbstractView;import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.Reader; import java.io.Writer; import java.util.Map;public abstract class AbstractAsciidoctorHtmlView extends AbstractView {public AbstractAsciidoctorHtmlView() { super.setContentType(MediaType.TEXT_HTML_VALUE); }protected abstract Reader getAsciidocMarkupReader();@Override protected void renderMergedOutputModel(Map<String, Object> model, HttpServletRequest request, HttpServletResponse response) throws Exception { //Set the content type of the response to 'text/html' response.setContentType(super.getContentType());Asciidoctor asciidoctor = Asciidoctor.Factory.create(); Options asciidoctorOptions = getAsciidoctorOptions();try ( //Get the reader that reads the rendered Asciidoc document //and the writer that writes the HTML markup to the request body Reader asciidoctorMarkupReader = getAsciidocMarkupReader(); Writer responseWriter = response.getWriter(); ) { //Transform Asciidoc markup into HTML and write the created HTML //to the response body asciidoctor.render(asciidoctorMarkupReader, responseWriter, asciidoctorOptions); } }private Options getAsciidoctorOptions() { Options asciiDoctorOptions = new Options(); //Ensure that Asciidoctor 
includes both the header and the footer of the Asciidoc //document when it is transformed into HTML. asciiDoctorOptions.setHeaderFooter(true); return asciiDoctorOptions; } } Second, we have to implement the ClasspathFileAsciidoctorHtmlView class. This class can read the Asciidoc markup from a file that is found from the classpath. We can implement this class by following these steps:Create the ClasspathFileAsciidoctorHtmlView class and extend the AbstractAsciidoctorHtmlView class. Add a private String field called asciidocFileLocation to the created class. This field contains the location of the Asciidoc file that is transformed into HTML. This location must be given in a format that is understood by the getResourceAsStream() method of the Class class. Create a constructor that takes the location of the rendered Asciidoc file as a constructor argument. Implement the constructor by calling the constructor of the superclass and storing the location of the rendered Asciidoc file in the asciidocFileLocation field. Override the getAsciidocMarkupReader() method and implement it by returning a new InputStreamReader object that is used to read the Asciidoc file found from the classpath.The source code of the ClasspathFileAsciidoctorHtmlView class looks as follows: import java.io.InputStreamReader; import java.io.Reader;public class ClasspathFileAsciidoctorHtmlView extends AbstractAsciidoctorHtmlView {private final String asciidocFileLocation;public ClasspathFileAsciidoctorHtmlView(String asciidocFileLocation) { super(); this.asciidocFileLocation = asciidocFileLocation; }@Override protected Reader getAsciidocMarkupReader() { return new InputStreamReader(this.getClass().getResourceAsStream(asciidocFileLocation)); } } Third, we have to implement the StringAsciidoctorHtmlView class that can read the Asciidoc markup from a String object.
We can implement this class by following these steps:Create the StringAsciidoctorHtmlView class and extend the AbstractAsciidoctorHtmlView class. Add a private String field called asciidocMarkup to the created class. This field contains the Asciidoc markup that is transformed into HTML. Create a constructor that takes the rendered Asciidoc markup as a constructor argument. Implement this constructor by calling the constructor of the superclass and setting the rendered Asciidoc markup in the asciidocMarkup field. Override the getAsciidocMarkupReader() method and implement it by returning a new StringReader object that is used to read the Asciidoc markup stored in the asciidocMarkup field.The source code of the StringAsciidoctorHtmlView looks as follows: import java.io.Reader; import java.io.StringReader;public class StringAsciidoctorHtmlView extends AbstractAsciidoctorHtmlView {private final String asciidocMarkup;public StringAsciidoctorHtmlView(String asciidocMarkup) { super(); this.asciidocMarkup = asciidocMarkup; }@Override protected Reader getAsciidocMarkupReader() { return new StringReader(asciidocMarkup); } } We have now created the required view classes. Let’s move on and find out how we can use these classes in a Spring MVC web application. Using the Created View Classes Our last step is to create the controller methods that use the created view classes. We have to implement two controller methods that are described in the following:The renderAsciidocDocument() method processes GET requests sent to the URL ‘/asciidoctor/document’, and it transforms an Asciidoc document into HTML and renders the created HTML.
The renderAsciidocString() method processes GET requests sent to the URL ‘/asciidoctor/string’, and it transforms an Asciidoc String into HTML and renders the created HTML.The source code of the AsciidoctorController class looks as follows: import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestMethod; import org.springframework.web.servlet.ModelAndView; @Controller public class AsciidoctorController { private static final String ASCIIDOC_FILE_LOCATION = "/asciidoctor/document.adoc"; private static final String ASCIIDOC_STRING = "= Hello, AsciiDoc (String)!\n" + "Doc Writer <doc@example.com>\n" + "\n" + "An introduction to http://asciidoc.org[AsciiDoc].\n" + "\n" + "== First Section\n" + "\n" + "* item 1\n" + "* item 2\n" + "\n" + "1\n" + "puts \"Hello, World!\""; @RequestMapping(value = "/asciidoctor/document", method = RequestMethod.GET) public ModelAndView renderAsciidocDocument() { //Create the view that transforms an Asciidoc document into HTML and //renders the created HTML. ClasspathFileAsciidoctorHtmlView docView = new ClasspathFileAsciidoctorHtmlView(ASCIIDOC_FILE_LOCATION); return new ModelAndView(docView); } @RequestMapping(value = "/asciidoctor/string", method = RequestMethod.GET) public ModelAndView renderAsciidocString() { //Create the view that transforms an Asciidoc String into HTML and //renders the created HTML. StringAsciidoctorHtmlView stringView = new StringAsciidoctorHtmlView(ASCIIDOC_STRING); return new ModelAndView(stringView); } }Additional Information:The Javadoc of the @Controller annotation The Javadoc of the @RequestMapping annotation The Javadoc of the ModelAndView classWe have now created the controller methods that use our view classes.
When the user of our application invokes a GET request to the url ‘/asciidoctor/document’, the source code of rendered HTML page looks as follows (the parts created by Asciidoctor are highlighted): <!doctype html><html> <head> <title>Hello, AsciiDoc (File)!</title> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet" type="text/css" href="/static/css/bootstrap.css"/> <link rel="stylesheet" type="text/css" href="/static/css/bootstrap-theme.css"/> <script type="text/javascript" src="/static/js/jquery-2.1.1.js"></script> <script type="text/javascript" src="/static/js/bootstrap.js"></script> <meta charset="UTF-8"> <!--[if IE]><meta http-equiv="X-UA-Compatible" content="IE=edge"><![endif]--> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta name="generator" content="Asciidoctor 1.5.0"> <meta name="author" content="Doc Writer"><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Open+Sans:300,300italic,400,400italic,600,600italic|Noto+Serif:400,400italic,700,700italic|Droid+Sans+Mono:400"> <link rel="stylesheet" href="./asciidoctor.css"></head> <body> <nav class="navbar navbar-inverse" role="navigation"> <div class="container-fluid"> <!-- Brand and toggle get grouped for better mobile display --> <div class="navbar-header"> <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> </div> <div class="collapse navbar-collapse"> <ul class="nav navbar-nav"> <li><a href="/">Document list</a></li> </ul> </div> </div> </nav> <div class="container-fluid"> <div id="header"> <h1>Hello, AsciiDoc (File)!</h1> <div class="details"> <span id="author" class="author">Doc Writer</span><br> <span id="email" class="email"><a 
href="mailto:doc@example.com">doc@example.com</a></span><br> </div> </div> <div id="content"> <div id="preamble"> <div class="sectionbody"> <div class="paragraph"> <p>An introduction to <a href="http://asciidoc.org">AsciiDoc</a>.</p> </div> </div> </div> <div class="sect1"> <h2 id="_first_section">First Section</h2> <div class="sectionbody"> <div class="ulist"> <ul> <li> <p>item 1</p> </li> <li> <p>item 2</p> </li> </ul> </div> <div class="listingblock"> <div class="content"> <pre class="highlight"><code class="language-ruby" data-lang="ruby">puts "Hello, World!"</code></pre> </div> </div> </div> </div> </div> <div id="footer"> <div id="footer-text"> Last updated 2014-09-21 14:21:59 EEST </div> </div></div> </body> </html> As we can see, the HTML created by Asciidoctor is embedded into our layout, which provides a consistent user experience to the users of our application. Let’s move on and evaluate the pros and cons of this solution. Pros and Cons The pros of our solution are:The rendered HTML documents share the same look and feel as the other pages of our application. This means that we can provide a consistent user experience to the users of our application. We can render both static files and strings that can be loaded from a database.The cons of our solution are:The war file of our simple application is huge (51.9 MB). The reason for this is that even though Asciidoctor has a Java API, it is written in Ruby. Thus, our application needs two big jar files:The size of the asciidoctorj-1.5.0.jar file is 27.5MB. The size of the jruby-complete-1.7.9.jar file is 21.7MB.Our application transforms Asciidoc documents into HTML when the user requests them. This has a negative impact on the response time of our controller methods because the bigger the document, the longer it takes to process it. The first request that renders an Asciidoc document as HTML is 4-5 times slower than the next requests.
I didn’t profile the application but I assume that JRuby has got something to do with this. At the moment it is not possible to use this technique if we want to transform Asciidoc documents into PDF documents.Let’s move on and summarize what we have learned from this blog post. Summary This blog post has taught us three things:We learned how we can configure Sitemesh to provide a consistent look and feel for our application. We learned how we can create the view classes that transform Asciidoc documents into HTML and render the created HTML. Even though our solution works, it has a lot of downsides that can make it unusable in real-life applications.The next part of this tutorial describes how we can solve the performance problems of this solution. P.S. If you want to play around with the example application of this blog post, you can get it from GitHub.Reference: Using Asciidoctor with Spring: Rendering Asciidoc Documents with Spring MVC from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....
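As a footnote to the performance discussion: one straightforward mitigation (not necessarily the approach the follow-up post takes) is to cache the transformation result per document, so that only the first request pays the Asciidoc-to-HTML cost. A minimal sketch with a ConcurrentHashMap follows; the class and method names are invented for illustration:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class RenderedHtmlCache {

    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

    // transform stands in for the expensive Asciidoc-to-HTML conversion;
    // computeIfAbsent runs it at most once per document key.
    public String get(String documentKey, Function<String, String> transform) {
        return cache.computeIfAbsent(documentKey, transform);
    }

    public static void main(String[] args) {
        RenderedHtmlCache cache = new RenderedHtmlCache();
        AtomicInteger transformCalls = new AtomicInteger();
        Function<String, String> expensive =
                key -> { transformCalls.incrementAndGet(); return "<html>" + key + "</html>"; };

        cache.get("document.adoc", expensive);
        String html = cache.get("document.adoc", expensive);

        System.out.println(html + " " + transformCalls.get());
    }
}
```

A real version would also need an invalidation strategy for documents loaded from a database, since the cache never expires entries in this sketch.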

Getting Started with Docker

If the number of articles, meetups, talk submissions at different conferences, tweets, and other indicators are taken into consideration, then it seems like Docker is going to solve world hunger. It would be nice if it did, but apparently not. It does, however, solve one problem really well! Let's hear it from @solomonstre, the creator of the Docker project!

In short, Docker simplifies software delivery by making it easy to build and share images that contain your application's entire environment, or application operating system.

What is meant by application operating system? Your application typically requires a specific version of the operating system, application server, JDK, and database server; it may require tuning of configuration files, and it has multiple other similar dependencies. The application may need to bind to specific ports and need a certain amount of memory. The components and configuration together required to run your application are what is referred to as the application operating system.

You can certainly provide an installation script that downloads and installs these components. Docker simplifies this process by allowing you to create an image that contains your application and infrastructure together, managed as one component. These images are then used to create Docker containers which run on the container virtualization platform provided by Docker.

What are the main components of Docker? Docker has two main components:

- Docker: the open source container virtualization platform
- Docker Hub: the SaaS platform for sharing and managing Docker images

Docker uses Linux Containers to provide isolation, sandboxing, reproducibility, constraining of resources, snapshotting, and several other advantages. Read this excellent piece at InfoQ on Docker Containers for more details. Images are the "build component" of Docker and a read-only template of the application operating system. Containers are the runtime representation and are created from images. They are the "run component" of Docker.
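To make the idea of an application operating system concrete, here is a minimal, hypothetical Dockerfile sketch. The `jboss/wildfly` base image name is taken from later in this article, but the `my-app.war` artifact and the deployment path are illustrative assumptions, not something shown here:

```dockerfile
# Hypothetical base image that already bundles the OS, a JDK, and WildFly
FROM jboss/wildfly

# Copy our (hypothetical) application into the server's deployment directory
ADD my-app.war /opt/wildfly/standalone/deployments/

# Document the port the application server binds to
EXPOSE 8080
```

An image built from such a file (for example with `docker build -t my-app .`) carries the whole stack, so every container created from it starts with an identical environment.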
Containers can be run, started, stopped, moved, and deleted. Images are stored in a registry, the "distribution component" of Docker. Docker in turn contains two components:

- Daemon: runs on a host machine and does the heavy lifting of building, running, and distributing Docker containers.
- Client: a Docker binary that accepts commands from the user and communicates back and forth with the Daemon.

How do these work together? The Client communicates with the Daemon, either co-located on the same host or on a different host. It requests the Daemon to pull an image from the repository using the pull command. The Daemon then downloads the image from Docker Hub, or whatever registry is configured. Multiple images can be downloaded from the registry and installed on the Daemon host. The Client can then start the Container using the run command. The complete list of client commands can be seen here. The Client communicates with the Daemon using sockets or the REST API.

Because Docker uses Linux Kernel features, does that mean I can use it only on Linux-based machines? The Docker daemon and client for different operating systems can be installed from docs.docker.com/installation/. As you can see, Docker can be installed on a wide variety of platforms, including Mac and Windows. For non-Linux machines, a lightweight Virtual Machine needs to be installed and the Daemon is installed within it. A native client is then installed on the machine and communicates with the Daemon. Here is the log from booting the Docker daemon on Mac:

bash
unset DYLD_LIBRARY_PATH ; unset LD_LIBRARY_PATH
mkdir -p ~/.boot2docker
if [ ! -f ~/.boot2docker/boot2docker.iso ]; then cp /usr/local/share/boot2docker/boot2docker.iso ~/.boot2docker/ ; fi
/usr/local/bin/boot2docker init
/usr/local/bin/boot2docker up && export DOCKER_HOST=tcp://$(/usr/local/bin/boot2docker ip 2>/dev/null):2375
docker version
~> bash
~> unset DYLD_LIBRARY_PATH ; unset LD_LIBRARY_PATH
~> mkdir -p ~/.boot2docker
~> if [ ! -f ~/.boot2docker/boot2docker.iso ]; then cp /usr/local/share/boot2docker/boot2docker.iso ~/.boot2docker/ ; fi
~> /usr/local/bin/boot2docker init
2014/07/16 09:57:13 Virtual machine boot2docker-vm already exists
~> /usr/local/bin/boot2docker up && export DOCKER_HOST=tcp://$(/usr/local/bin/boot2docker ip 2>/dev/null):2375
2014/07/16 09:57:13 Waiting for VM to be started...
.......
2014/07/16 09:57:35 Started.
2014/07/16 09:57:35 To connect the Docker client to the Docker daemon, please set:
2014/07/16 09:57:35     export DOCKER_HOST=tcp://
~> docker version
Client version: 1.1.1
Client API version: 1.13
Go version (client): go1.2.1
Git commit (client): bd609d2
Server version: 1.1.1
Server API version: 1.13
Go version (server): go1.2.1
Git commit (server): bd609d2

For example, the Docker Daemon and Client can be installed on Mac following the instructions at docs.docker.com/installation/mac. The VM can be stopped from the CLI as:

boot2docker stop

And then started again as:

boot2docker boot

And logged in to as:

boot2docker ssh

The complete list of boot2docker commands is available via help:

~> boot2docker help
Usage: boot2docker [<options>] <command> [<args>]

boot2docker management utility.

Commands:
    init                Create a new boot2docker VM.
    up|start|boot       Start VM from any states.
    ssh [ssh-command]   Login to VM via SSH.
    save|suspend        Suspend VM and save state to disk.
    down|stop|halt      Gracefully shutdown the VM.
    restart             Gracefully reboot the VM.
    poweroff            Forcefully power off the VM (might corrupt disk image).
    reset               Forcefully power cycle the VM (might corrupt disk image).
    delete|destroy      Delete boot2docker VM and its disk image.
    config|cfg          Show selected profile file settings.
    info                Display detailed information of VM.
    ip                  Display the IP address of the VM's Host-only network.
    status              Display current state of VM.
    download            Download boot2docker ISO image.
    version             Display version information.

Enough talk, show me an example?
Some of the JBoss projects are available as Docker images at www.jboss.org/docker and can be installed following the commands explained on that page. For example, the WildFly Docker image can be installed as:

~> docker pull jboss/wildfly
Pulling repository jboss/wildfly
2f170f17c904: Download complete
511136ea3c5a: Download complete
c69cab00d6ef: Download complete
88b42ffd1f7c: Download complete
fdbe853b54e1: Download complete
bc93200c3ba0: Download complete
0daf76299550: Download complete
3a7e1274035d: Download complete
e6e970a0db40: Download complete
1e34f7a18753: Download complete
b18f179f7be7: Download complete
e8833789f581: Download complete
159f5580610a: Download complete
3111b437076c: Download complete

The image can be verified using the command:

~> docker images
REPOSITORY      TAG     IMAGE ID      CREATED      VIRTUAL SIZE
jboss/wildfly   latest  2f170f17c904  8 hours ago  1.048 GB

Once the image is downloaded, the container can be started as:

docker run jboss/wildfly

By default, Docker containers do not provide an interactive shell or input from STDIN. So if the WildFly Docker container is started using the command above, it cannot be terminated using Ctrl + C. Specifying the -i option makes the container interactive and the -t option allocates a pseudo-TTY. In addition, we'd also like to make port 8080 accessible outside the container, i.e. on our localhost. This can be achieved by specifying -p 80:8080, where 80 is the host port and 8080 is the container port.
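As a quick aside on the -p syntax: the value is always read as HOST:CONTAINER. A tiny shell sketch (plain parameter expansion, no Docker needed) makes the convention explicit:

```shell
# Split a Docker -p HOST:CONTAINER port mapping into its two halves
mapping="80:8080"
host_port="${mapping%%:*}"       # text before the ':' -> port on the host
container_port="${mapping##*:}"  # text after the ':'  -> port inside the container
echo "host=${host_port} container=${container_port}"
# prints: host=80 container=8080
```

So requests to port 80 on the host are forwarded to port 8080 inside the container.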
So we'll run the container as:

docker run -i -t -p 80:8080 jboss/wildfly
=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: /opt/wildfly

  JAVA: java

  JAVA_OPTS: -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true

=========================================================================

22:08:29,943 INFO [org.jboss.modules] (main) JBoss Modules version 1.3.3.Final
22:08:30,200 INFO [org.jboss.msc] (main) JBoss MSC version 1.2.2.Final
22:08:30,297 INFO [org.jboss.as] (MSC service thread 1-6) JBAS015899: WildFly 8.1.0.Final "Kenny" starting
22:08:31,935 INFO [org.jboss.as.server] (Controller Boot Thread) JBAS015888: Creating http management service using socket-binding (management-http)
22:08:31,961 INFO [org.xnio] (MSC service thread 1-7) XNIO version 3.2.2.Final
22:08:31,974 INFO [org.xnio.nio] (MSC service thread 1-7) XNIO NIO Implementation Version 3.2.2.Final
22:08:32,057 INFO [org.wildfly.extension.io] (ServerService Thread Pool -- 31) WFLYIO001: Worker 'default' has auto-configured to 16 core threads with 128 task threads based on your 8 available processors
22:08:32,108 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 32) JBAS010280: Activating Infinispan subsystem.
22:08:32,110 INFO [org.jboss.as.naming] (ServerService Thread Pool -- 40) JBAS011800: Activating Naming Subsystem
22:08:32,133 INFO [org.jboss.as.security] (ServerService Thread Pool -- 45) JBAS013171: Activating Security Subsystem
22:08:32,178 INFO [org.jboss.as.jsf] (ServerService Thread Pool -- 38) JBAS012615: Activated the following JSF Implementations: [main]
22:08:32,206 WARN [org.jboss.as.txn] (ServerService Thread Pool -- 46) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique.
22:08:32,348 INFO [org.jboss.as.security] (MSC service thread 1-3) JBAS013170: Current PicketBox version=4.0.21.Beta1
22:08:32,397 INFO [org.jboss.as.webservices] (ServerService Thread Pool -- 48) JBAS015537: Activating WebServices Extension
22:08:32,442 INFO [org.jboss.as.connector.logging] (MSC service thread 1-13) JBAS010408: Starting JCA Subsystem (IronJacamar 1.1.5.Final)
22:08:32,512 INFO [org.wildfly.extension.undertow] (MSC service thread 1-9) JBAS017502: Undertow 1.0.15.Final starting
22:08:32,512 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 47) JBAS017502: Undertow 1.0.15.Final starting
22:08:32,570 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 27) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
22:08:32,660 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-10) JBAS010417: Started Driver service with driver-name = h2
22:08:32,736 INFO [org.jboss.remoting] (MSC service thread 1-7) JBoss Remoting version 4.0.3.Final
22:08:32,836 INFO [org.jboss.as.naming] (MSC service thread 1-15) JBAS011802: Starting Naming Service
22:08:32,839 INFO [org.jboss.as.mail.extension] (MSC service thread 1-15) JBAS015400: Bound mail session
22:08:33,406 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 47) JBAS017527: Creating file handler for path /opt/wildfly/welcome-content
22:08:33,540 INFO [org.wildfly.extension.undertow] (MSC service thread 1-13) JBAS017525: Started server default-server.
22:08:33,603 INFO [org.wildfly.extension.undertow] (MSC service thread 1-8) JBAS017531: Host default-host starting
22:08:34,072 INFO [org.wildfly.extension.undertow] (MSC service thread 1-13) JBAS017519: Undertow HTTP listener default listening on /
22:08:34,599 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-11) JBAS015012: Started FileSystemDeploymentService for directory /opt/wildfly/standalone/deployments
22:08:34,619 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-9) JBAS010400: Bound data
22:08:34,781 INFO [org.jboss.ws.common.management] (MSC service thread 1-13) JBWS022052: Starting JBoss Web Services - Stack CXF Server 4.2.4.Final
22:08:34,843 INFO [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on
22:08:34,844 INFO [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on
22:08:34,845 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.1.0.Final "Kenny" started in 5259ms - Started 184 of 233 services (81 services are lazy, passive or on-demand)

The container's IP address can be found as:

~> boot2docker ip
The VM's Host only interface IP address is:

The started container can be verified using the command:

~> docker ps
CONTAINER ID  IMAGE                 COMMAND               CREATED         STATUS         PORTS               NAMES
b2f8001164b0  jboss/wildfly:latest  /opt/wildfly/bin/sta  46 minutes ago  Up 12 minutes  8080/tcp, 9990/tcp  sharp_pare

And now the WildFly server can be accessed on your local machine and looks as shown:

Finally, the container can be stopped by hitting Ctrl + C, or by giving the command:

~> docker stop b2f8001164b0
b2f8001164b0

The container id obtained from "docker ps" is passed to the command here. More detailed instructions on using this image, such as booting in domain mode, deploying applications, etc., can be found at github.com/jboss/dockerfiles/blob/master/wildfly/README.md. What else would you like to see in the WildFly Docker image?
File an issue at github.com/jboss/dockerfiles/issues. Other images that are available at jboss.org/docker are:

- KeyCloak
- TorqueBox
- Immutant
- LiveOak
- AeroGear

Reference: Getting Started with Docker from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact