Using Java API for WebSockets in JDeveloper 12.1.3

Introduction

The latest release of JDeveloper 12c (12.1.3), along with WebLogic Server 12.1.3, came with some new Java EE 7 features. One of them is support for JSR 356, the Java API for WebSockets. The WebSocket protocol (RFC 6455) has actually been supported for several releases already, but that support was based on a WebLogic-specific implementation of the WebSocket API. This proprietary WebLogic Server WebSocket API has now been deprecated, although it is still supported for backward compatibility. In this post I am going to show an example of using the JSR 356 Java API for WebSockets in a simple ADF application.

The use case is a sailing regatta taking place in the Tasman Sea. Three boats are participating, sailing from the Australian coast to the New Zealand coast. The goal of the sample application is to monitor the regatta and keep users informed of its progress by showing the positions of the boats on a map. We are going to declare a WebSocket server endpoint in the application; when a user opens a page, a JavaScript function opens a new WebSocket connection. The application uses a scheduled service that updates the boats' coordinates every second and sends a message containing the new positions to all active WebSocket clients. On the client side, a JavaScript function receives the message and adds markers to a Google map according to the GPS coordinates. So each user interested in the regatta sees the same updated picture representing the current state of the competition.

WebSocket server endpoint

Let's start by declaring a WebSocket server endpoint. There is a small issue in the current implementation, which will probably be resolved in a future release: WebSocket endpoints cannot be mixed with ADF pages, and they have to be deployed in a separate WAR file.
The easiest way to do that is to create a separate WebSocket project within the application and declare all necessary endpoints in that project. It is also important to set up a readable Java EE Web Context Root for the project. The next step is to create a Java class that is going to be a WebSocket endpoint. This is an ordinary class with a special annotation at the very beginning:

    @ServerEndpoint(value = "/message")
    public class MessageEndPoint {

        public MessageEndPoint() {
            super();
        }
    }

Note that JDeveloper underlines the annotation in red. We fix the issue by letting JDeveloper configure the project for WebSocket. Having done that, JDeveloper converts the project into a Web project, adding the web.xml file and the necessary library. Furthermore, the endpoint class becomes runnable, so we can just run it to check how it actually works. In response, JDeveloper generates the URL at which the WebSocket endpoint is available. Note that the URL contains the project context root (WebSocket) and the value property of the annotation (/message). If everything is ok, then when we click the URL we get the "Connected successfuly" information window. (By the way, there is a typo in the message.) And now let's add some implementation to the WebSocket endpoint class. According to the specification, a new instance of the MessageEndPoint class is created for each WebSocket connection.
In order to hold all active WebSocket sessions, we use a static queue:

    public class MessageEndPoint {
        //A new instance of the MessageEndPoint class
        //is created for each WebSocket connection.
        //This queue contains all active WebSocket sessions.
        final static Queue<Session> queue = new ConcurrentLinkedQueue<>();

        @OnOpen
        public void open(Session session) {
            queue.add(session);
        }

        @OnClose
        public void closedConnection(Session session) {
            queue.remove(session);
        }

        @OnError
        public void error(Session session, Throwable t) {
            queue.remove(session);
            t.printStackTrace();
        }

The annotated methods open, closedConnection and error are invoked when a new connection has been established, when it has been closed, and when something has gone wrong, respectively. With that in place, we can use a static method to broadcast a text message to all clients:

    public static void broadcastText(String message) {
        for (Session session : queue) {
            try {
                session.getBasicRemote().sendText(message);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

In our use case we have to notify users of the boats' new GPS coordinates, so we should be able to send something more complex than plain text messages via WebSockets.
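The broadcast pattern above can be exercised without a running WebLogic server. The sketch below substitutes a hypothetical FakeSession interface for javax.websocket.Session purely for illustration; it shows why ConcurrentLinkedQueue is a good fit here, since its iterators tolerate concurrent add/remove from the @OnOpen and @OnClose callbacks:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Minimal stand-in for javax.websocket.Session, for illustration only
interface FakeSession {
    void sendText(String message);
}

public class BroadcastSketch {
    // Same idea as MessageEndPoint's static queue of active sessions
    static final Queue<FakeSession> queue = new ConcurrentLinkedQueue<>();

    static void broadcastText(String message) {
        // ConcurrentLinkedQueue iterators are weakly consistent, so sessions
        // may be added or removed concurrently without breaking this loop
        for (FakeSession session : queue) {
            session.sendText(message);
        }
    }

    public static void main(String[] args) {
        List<String> received = new ArrayList<>();
        queue.add(m -> received.add("client1: " + m));
        queue.add(m -> received.add("client2: " + m));
        broadcastText("hello");
        System.out.println(received); // [client1: hello, client2: hello]
    }
}
```

In the real endpoint the loop body also has to catch IOException per session, as shown above, so that one broken connection does not stop the broadcast to the remaining clients.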
Sending an object

Basically, the business model of the sample application is represented by two plain Java classes. Boat:

    public class Boat {
        private final String country;
        private final double startLongitude;
        private final double startLatitude;
        private double longitude;
        private double latitude;

        public Boat(String country, double longitude, double latitude) {
            this.country = country;
            this.startLongitude = longitude;
            this.startLatitude = latitude;
            //initialize the current position as well,
            //so the getters return sensible values before the first move
            this.longitude = longitude;
            this.latitude = latitude;
        }

        public String getCountry() {
            return country;
        }

        public double getLongitude() {
            return longitude;
        }

        public double getLatitude() {
            return latitude;
        }
    ...

and Regatta:

    public class Regatta {
        private final Boat[] participants = new Boat[] {
            new Boat("us", 151.644, -33.86),
            new Boat("ca", 151.344, -34.36),
            new Boat("nz", 151.044, -34.86)
        };

        public Boat[] getParticipants() {
            return participants;
        }
    ...

For our use case we are going to send an instance of the Regatta class to the WebSocket clients. The Regatta contains all regatta participants, represented by Boat instances with updated GPS coordinates (longitude and latitude). This can be done by creating a custom implementation of the Encoder.Text<Regatta> interface; in other words, we create an encoder that can transform a Regatta instance into text, and tell the WebSocket endpoint to use this encoder whenever it sends a Regatta instance.
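Before writing the encoder itself, it helps to see the JSON shape it needs to produce. The standalone sketch below builds the same payload with plain string formatting; the nested Boat record is a hypothetical stand-in for the article's Boat class, and this is only an illustration of the wire format, not a replacement for the Encoder.Text implementation:

```java
import java.util.StringJoiner;

public class JsonSketch {
    // Hypothetical stand-in for the article's Boat class
    record Boat(String country, double longitude, double latitude) { }

    static String encode(Boat[] boats) {
        // Assemble a JSON array: [{...},{...},...]
        StringJoiner array = new StringJoiner(",", "[", "]");
        for (Boat b : boats) {
            // One {"country":...,"longitude":...,"latitude":...} object per boat
            array.add(String.format(
                "{\"country\":\"%s\",\"longitude\":%s,\"latitude\":%s}",
                b.country(), b.longitude(), b.latitude()));
        }
        return array.toString();
    }

    public static void main(String[] args) {
        Boat[] boats = {
            new Boat("us", 151.67, -33.84),
            new Boat("nz", 151.044, -34.86)
        };
        System.out.println(encode(boats));
        // [{"country":"us","longitude":151.67,"latitude":-33.84},{"country":"nz","longitude":151.044,"latitude":-34.86}]
    }
}
```

In real code the javax.json builder API used by the article's encoder is preferable, since it handles escaping and numeric formatting for you; this sketch only shows the payload the client-side JSON.parse call will receive.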
    public class RegattaTextEncoder implements Encoder.Text<Regatta> {

        @Override
        public void init(EndpointConfig ec) { }

        @Override
        public void destroy() { }

        private JsonObject encodeBoat(Boat boat) throws EncodeException {
            JsonObject jsonBoat = Json.createObjectBuilder()
                .add("country", boat.getCountry())
                .add("longitude", boat.getLongitude())
                .add("latitude", boat.getLatitude()).build();
            return jsonBoat;
        }

        @Override
        public String encode(Regatta regatta) throws EncodeException {
            JsonArrayBuilder arrayBuilder = Json.createArrayBuilder();
            for (Boat boat : regatta.getParticipants()) {
                arrayBuilder.add(encodeBoat(boat));
            }
            return arrayBuilder.build().toString();
        }
    }

    @ServerEndpoint(
        value = "/message",
        encoders = { RegattaTextEncoder.class })

Having done that, we can send objects to our clients:

    public static void sendRegatta(Regatta regatta) {
        for (Session session : queue) {
            try {
                session.getBasicRemote().sendObject(regatta);
            } catch (EncodeException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

The RegattaTextEncoder represents a Regatta object as a list of boats in JSON notation, so the payload looks something like this:

    [{"country":"us","longitude":151.67,"latitude":-33.84},{"country":"ca", ...},{"country":"nz", ...}]

Receiving a message

On the client side we use a JavaScript function to open a new WebSocket connection:

    //Open a new WebSocket connection
    //Invoked on page load
    function connectSocket() {
        websocket = new WebSocket(getWSUri());
        websocket.onmessage = onMessage;
    }

And when a message arrives, we loop over the array of boats and add a marker to the map for each boat:

    function onMessage(evt) {
        var boats = JSON.parse(evt.data);
        for (i = 0; i < boats.length; i++) {
            markBoat(boats[i]);
        }
    }

    function markBoat(boat) {
        var image = '../resources/images/' + boat.country + '.png';
        var latLng = new google.maps.LatLng(boat.latitude, boat.longitude);
        mark = new google.maps.Marker({
            position: latLng,
            map: map,
            title: boat.country,
            icon: image
        });
    }

You can learn here how to integrate Google Maps into your applications.

Run the regatta

In order to emulate a live show we use a ScheduledExecutorService. Every second we update the GPS coordinates and broadcast the update to all subscribers:

    private final ScheduledExecutorService scheduler =
        Executors.newScheduledThreadPool(1);
    private ScheduledFuture<?> runHandle;

    //Schedule a new regatta on Start button click
    public void startRegatta(ActionEvent actionEvent) {
        //Cancel the previous regatta
        if (runHandle != null) {
            runHandle.cancel(false);
        }
        runHandle = scheduler.scheduleAtFixedRate(new RegattaRun(), 1, 1,
                                                  TimeUnit.SECONDS);
    }

    public class RegattaRun implements Runnable {

        private final static double FINISH_LONGITUDE = 18;
        private final Regatta regatta = new Regatta();

        //Every second update GPS coordinates and broadcast
        //the new positions of the boats
        public void run() {
            regatta.move();
            MessageEndPoint.sendRegatta(regatta);
            if (regatta.getLongitude() >= FINISH_LONGITUDE) {
                runHandle.cancel(true);
            }
        }
    }

Bet on your boat

And finally, the result of our work looks like this. The sample application for this post requires JDeveloper 12.1.3. Have fun! That's it!

Reference: Using Java API for WebSockets in JDeveloper 12.1.3 from our JCG partner Eugene Fedorenko at the ADF Practice blog.
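The article does not show Regatta.move() or the getLongitude() check, but the fixed-rate scheduling pattern used by RegattaRun can be sketched in isolation. The tick below is a hypothetical stand-in that simply advances a longitude value until a finish line is crossed, then cancels its own task, just as RegattaRun cancels runHandle; step sizes, the finish value and the millisecond period are all invented for the sketch (the article uses one-second ticks):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class RegattaSketch {
    // volatile: the handle is written by main and read by the scheduler thread
    static volatile ScheduledFuture<?> runHandle;

    // Advance the longitude every period until the finish line is crossed,
    // then cancel the fixed-rate task (the pattern RegattaRun uses)
    static double run(double step, double finishLongitude, long periodMillis)
            throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        CountDownLatch finished = new CountDownLatch(1);
        double[] longitude = { 0.0 };   // hypothetical boat position

        Runnable tick = () -> {
            longitude[0] += step;       // stands in for regatta.move()
            if (longitude[0] >= finishLongitude) {
                runHandle.cancel(false);   // stop the regatta at the finish
                finished.countDown();
            }
        };
        runHandle = scheduler.scheduleAtFixedRate(tick, periodMillis, periodMillis,
                                                  TimeUnit.MILLISECONDS);
        finished.await();
        scheduler.shutdown();
        return longitude[0];
    }

    public static void main(String[] args) throws InterruptedException {
        // five ticks of 0.1 degrees, 20 ms apart
        System.out.printf("finished at longitude %.1f%n", run(0.1, 0.5, 20));
    }
}
```

Cancelling from inside the task is the detail worth noting: scheduleAtFixedRate keeps rescheduling forever, so the Runnable itself must decide when the race is over and cancel the returned ScheduledFuture.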

JavaOne 2014: Conferences conflict with contractual interests

The Duke’s Street Cafe, where engineers can have a hallway conversation on the street.

Incompatible with contracting

My eleventh JavaOne conference (11 = 10 + 1, 2004 to 2014) was splendid. It was worth attending this event and meeting all the people involved in the community. Now here comes the gentleman’s but. My attendance came at some cost beyond the obvious financial one of the hotel and plane ticket. It appears that going to conferences is seriously incompatible with the motivations of a contracting business. One cannot have freedom and escape the obligations of professional work. Despite all of the knowledge that we have gained as professional developers, designers and architects, if your client requires you to be on site and you are not around, attending a conference like JavaOne 2014 can, in certain minds, be taken as an illustrious and salubrious adventure for your own benefit. On the one hand this is a fair assessment: a client pays a contractor to be available for a burning need, and that is balanced against team work, morale, deadlines and commitments. At the back of my mind, there are two schools of thought. One way is not to care too much about clients, but then a contractor will find they have a devalued reputation and a lack of repeat business. The other way is never to take time away from project work for a client, and to rely on contracts ending or finishing exactly before or after a major conference like JavaOne. So what to do in 2015? How can I reconcile contracting and conferences? I believe the answer, unfortunately, is to reduce the conferences I attend to the minimum. It means that I will consider whether JavaOne 2015 is going to be viable or not.

The keynote question and answer session, with the Twitter hashtag #j1qa (which has long expired), featuring John Rose (far left), James Gosling (inner left), Brian Goetz (middle), Brian Oliver (inner right) and Charles Nutter (far right).
The chair was Mark Reinhold.

Ten years ago, when I worked in investment banking in the good times, I could pretty much rely on 6-month J2EE contracts lasting the full term. At Credit Suisse, I managed six-month contract renewals with ease, as long as I performed and finished project work on time. In 2014 the climate is more restrictive; the pressure on high-profile projects and the uncertainty of business mean that contracts typically start at 3 months, with no guarantee of renewal. And if you think that permanent employment solves the dilemma, you are incorrect. A contract is temporary, and by definition that implies a contractor is treated as a temporary resource, but a permanent person can also be removed at short notice in the United Kingdom if they have less than two years with the employer. When you consider that a typical IT job lasts about two to three years before somebody changes role, you can see that even permanent people have to be extremely careful with their holiday planning and entitlement. Yes, you are entitled to 25 days or more, but if you fail to give forewarning and mess around with the programme delivery manager’s project plan too much, don’t be surprised if a ton of bricks eventually comes tumbling down.

A picture with the Java mascot to complete the collection. I wonder if Duke has a sixth sense and if she/he/it can sense the trouble lurking ahead in my subconscious.

Frustrating as it is, and with more than a year until the next JavaOne conference (late October 2015, from Sunday 25th to Thursday 29th), I can’t say with real confidence that I will be there. I will, of course, submit some Calls for Papers when the time approaches, but whether I can attend will depend on client requirements. If I do attend, then I probably cannot stick around California to see friends. Even for the UK and European conferences, I can only see trouble ahead with more conflicts.
I have already decided that I will not be at Devoxx in Belgium. There are also issues when conference planning runs late: if confirmations are validated less than three months before the event, project managers are already looking at their schedules for resourcing, and if a contractor is going to disappear, they are easily replaced with somebody who will be around to fix the present pain, which is what work is more often than not about. I have found that clients typically do not have an attitude of kindness; it is about the budget and time. That is the way the business world is run now, and the only conference speakers who can give up the time are the developer advocates, the people who are paid to speak or promote at conferences. Independents are finding it harder, and there will be no improvement in this situation. I just can’t seem to find that benevolent, technology-loving business client who understands me for what I am.

These guys and gals at Aldebaran Robotics with their NAO robots are inspirational. This is a photo from the JavaOne demo grounds and exhibition.

Reference: JavaOne 2014: Conferences conflict with contractual interests from our JCG partner Peter Pilgrim at Peter Pilgrim’s blog.

The Heroes of Java: Dan Allen

The “Heroes of Java” series took a long break. Honestly, I thought it might end in the middle of nowhere, even though there are still so many people I would love to include here. One of them is Dan. The first time I asked him to contribute was almost one and a half years ago, and with everything that happened in the meantime, I had made my peace with not getting an answer anymore. But the following arrived in my inbox during JavaOne and was basically a birthday present for me. So, I open the Heroes of Java book again today and add another chapter to it! Thank you, Dan! It is very good to call you a friend!

Dan Allen

Dan Allen is an open source and standards advocate and innovator. He worked at Red Hat as a Principal Software Engineer. In that role, he served as the Arquillian community manager, contributed to various open source projects (including Arquillian, Asciidoctor, Awestruct and JBoss Forge) and participated in the JCP. He helped a variety of open source projects become wildly successful. He’s also the author of Seam in Action (Manning, 2008), has written technical articles for various publications and is an internationally recognized speaker.

General

Who are you?

I’m an open source advocate and developer, community catalyst, author, speaker and business owner. Currently, I’m working to improve the state of documentation by leading the Asciidoctor project, advocating for better software quality through Arquillian, and, generally, doing whatever I can to make the open source projects to which I contribute, and their communities, wildly successful. After a long conference day, you’ll likely find me geeking out with fellow community members over a Trappist beer.

Your official job title at your company?

Vice President, Open Source Hacker and Community Strategist at OpenDevise, a consulting firm I founded with Sarah White.

Do you care about it?

I care more about this title, compared to titles I’ve had in the past, primarily because I got to define it.
In general, though, titles can be pretty meaningless. Take my previous title, Middleware Principal Software Engineer. All that titles like this really manage to accomplish is to communicate an employee’s pay grade. The honorific that follows “Principal” is “Senior Principal”. Then, what next? “Principal Principal”? What was I before? A Junior Insignificant Engineer? We might as well just use number grades like in the US government (e.g. GS-10). At least that’s a logical system. Like many of my peers, I’ve always sought to define my own title for my role. To me, the purpose of a title is to help others know your specialty and focus. That way, they know when you’re the one they need to seek out. That’s why I chose the title “Open Source Hacker and Community Strategist”. I live and breathe open source, so the “Open Source” part of the title fits. If you want to discuss anything about open source, I’m always game. I also love community, especially passionate ones. I’m always thinking about it and how to make it work better. That’s where the term “community strategist” comes in. I enjoy getting people excited about a technology and then being there to help get them going when they find their passion to improve or innovate on it. It’s such a thrilling and proud experience for both sides. To me, that feeling is called open source. I simply work to reproduce it over and over as an “Open Source Hacker and Community Strategist”. Maybe one day people will recognize me as a “Serial Community Creator”! Those of us in open source also identify ourselves by the projects we lead or help manage, if any. Currently, I’m the Asciidoctor project lead, and it’s about as much as I can handle.

Do you speak foreign languages? Which ones?

I wish. I studied French in high school, but consider that experience purely academic. I’m challenging myself to read tweets in French to brush up on what I once knew.
My real-life experience with foreign languages comes from interacting with open source community members from around the globe and spending time in other countries. Even though I cannot understand other languages, I enjoy taking in the sounds and rhythms like music. There’s a certain amount of enjoyment I get from listening without the distraction of comprehension. My favorite foreign language experience was working with the translations, and their translators, of the Arquillian User Guides. Not only did it expose me to a lot of languages (over a dozen), it gave me a first-hand appreciation for how much language plays into a person’s identity and the feeling of pride for one’s country. The experience also pushed me to understand Unicode and fonts. I’m proud to say that I get the whole point of Unicode and how it works (at least from a programming standpoint). I look forward to working more with translations, rethinking how translations are managed and continuing to take in the sounds and rhythms of languages. One day, perhaps, I will be fluent in at least one of them.

How long is your daily “bootstrap” process?

A more interesting question might be “when?”, since I keep some pretty odd hours. My daily goal is usually to get to bed before the sun comes up. That makes my breakfast and bootstrap process your lunch. That all depends on time zone, of course. As one of my colleagues pointed out, I’m surprisingly non-Vampirish at conferences. You may be wondering what’s with the crazy schedule. The thing about managing an open source project is that you never know when someone is going to be ready to participate. When someone shows up ready to participate, you need to jump on the opportunity. It could be a while (if ever) before they have time again. And that person could be in any time zone in the world. Truth be told, I like the night just as much as the day anyway. There’s a solitude at night that I enjoy, and I often do some of my best work then.
Other times, I just enjoy the silence. I look forward to the day too, especially when the view of the Colorado Rockies is clear. I do some of my best work against the backdrop of their purple or white peaks. You might say that I draw inspiration from both the day and night to feed my creativity. I only do coffee first thing in my “morning”, but I do the other bootstrap activities (like Twitter) several times a day. It takes me about an hour or two to sift through my e-mail and Twitter, with a pit stop at Google+.

Twitter

You have a Twitter handle? Why?

For sure. It’s @mojavelinux. I have a Twitter account:

- to be open
- to connect
- to discover
- to report
- to keep in touch

When I first started using Twitter (over 6 years ago), many people thought it was ridiculous and pointless. I was drawn to it because it offered a way to communicate without any prior arrangements. It’s sort of like a global IRC channel with a contextual filter applied to it. Twitter has changed the way I do business, and the way I interact with my colleagues and community. Rather than try to explain it, I’ll give two examples. When we were growing the Seam 3 community, we didn’t just wait for people to come join the mailing list. We looked for people talking about JSF and Java EE on Twitter. One of the more vocal people at that time was Brian Leathem. When he posted feedback or a complaint about JSF, we would engage him by responding to him directly. That turned his post into the start of a conversation or design session. When it came time to hire someone for a related position, he was already a top candidate, and has since become a top employee. There are more stories like Brian’s. It’s easy to conclude that we “hired someone we met on Twitter”. That misses the whole point. Twitter’s public channel gave us an opportunity to find someone who has deep interest and experience with a particular technology or platform. It’s so public that we don’t even have to know where to look for each other (except on Twitter).
The meetup is inevitable. Twitter has also eliminated the overhead of communicating with associates in your own company or even other companies. You just put out a broadcast on Twitter, usually planting a few trigger words or tags, and that person will see it, or someone will pass it on to that person. Either way, you cut out the whole hassle of an employee directory. There’s a global conversation happening on Twitter and we’re all a part of it. Now that’s open.

Whom are you following in general?

First and foremost, my fellow community members. As I mentioned, Twitter is how I keep the pulse on my community and communicate with them throughout the day. I follow a few company and project feeds, such as GitHub and Java EE, but mostly I like to know there is a person behind the account. I’m hesitant about following anyone I haven’t met, either in person or through a conversation online. I follow the same policy for LinkedIn and Google+ as well.

Do you have a personal “policy” for Twitter?

One policy is to stay dialed in. I plow through my timeline at least once a day and try to respond to any questions I’m asked. As a community leader, it’s important to be present and participate in the global conversation. Some days, I iron out my agenda only after consulting my stream. I do make sure not to let it take over (sort of). When I find myself only reading or retweeting, but not sharing, I realize I need to get back to creating so that I have something to share (or just take a break). I’m very careful to post and retweet useful information. That’s an important part of my personal policy. I use tools like Klout, the Twitter mentions tab and the new Twitter analytics to learn what people consider useful or interesting, and I focus on expanding on those topics. I dial down topics that get little response, because I respect the time of my followers.

Does your company restrict or encourage your Twitter usage?

The company policy is: use your own judgment.
Public social networks have had a tremendously positive impact on open source, primarily because open source is both public and social. That makes Twitter pretty central to my position. We often discover new contributors (and vice versa) on Twitter. We also use it as a 140-character-limit mailing list at times (which, trust me, is a relief from the essays that are often found on real mailing lists). Simply put, I couldn’t do my job (in this day and age) without Twitter (or something like it).

Work

What’s your daily development setup?

A tabbed terminal with lots of Vim and a web browser. Nearly all the work I do happens in these environments. Since I’ve been heavily involved in AsciiDoc and writing content in general, many of my Vim sessions have an AsciiDoc document queued up. I do all my Ruby development in Vim. I rely on syntax highlighting and my own intuition as my Ruby IDE. If you saw the number of times I split the window, it would frighten you. Don’t mimic what I do; it’s probably terribly inefficient, but somehow it works for me. When I need to do some Java hacking, I absolutely must fire up an IDE. Editing Java in Vim (without any additional plugins) is just a waste of time. I’m most comfortable in Eclipse because that’s what I used first in my career. However, I’ve been firing up IntelliJ IDEA more often lately, and I do like NetBeans on occasion. When I have to edit XML in the project, I flip back to Vim because copy-paste is much more efficient! The development tools in the browser are a life and time saver when editing CSS. I like to work out the CSS rules I want in a live session, then transfer them to the stylesheet in the project. It all begins with “Inspect element”.

Which is the tool providing most productivity to your work?

Vim. I’ve used Vim every single day I’ve been at a computer for the last decade. I couldn’t imagine life without it. Vim is my hammer.

Your preferred way of interacting with co-workers?
Primarily async communication, with a few face-to-face meetups a year. The async communication is a mix of mailing lists, social networks, emails and (on and off) IRC. Most personal emails with my close colleagues have been replaced by Google+ and Twitter private messages, since we all have too much email. You’d be amazed how much more effective those private messages are. Something certainly worth noting. We usually get face time at conferences like Devoxx and JavaOne. This time is so important because it’s when we form the impression of the person behind the screen name. After you’ve met someone, and heard their voice, you’ll never read an email from them the same way again. You’ll hear it coming from them, with their voice and expressions. Those impressions, and the bonds you form in person, are what make the virtual relationships work. You also discover some other things to talk about besides tech (or your tech in particular). Occasionally, I get put on teams that like to do phone meetings. First, will someone please kill conference lines? They are horrible and a buzz kill. Besides that, phone calls in a global company simply don’t work. No time is a good time for someone. When we finally do manage to get (most) everyone on the phone, no one knows when to talk (or shut up). It’s a circus. Return me to my async communication. If I do need to be “on the phone”, I prefer Google Hangouts (when it works). I’m not exaggerating when I say it’s almost as good as being in person.

What’s your favorite way of managing your todos?

I did a lot of research in this area and decided on an online application named Nirvana. It adheres to David Allen’s GTD method more faithfully than any other one I evaluated. When I’m good about sticking to it, it serves me well. When I’m not so good, I fall back to my two anchors: a text file named WORKLOG and my email inbox.
One trick I’ve used for years, which works great for context switching, is maintaining a WORKLOG file in each project that I work on. The tasks in this file aren’t very pressing, but they do remind me of what I want to do next when I have time to work on the project. It’s especially useful when you return to a project after a long break.

If you could make a wish for a job at your favorite company, what would that be?

I’m at the point now where my ideal job isn’t at someone else’s company, but at my own. One of the main reasons I love open source is the autonomy it grants. I don’t have problems finding ways to create value, but I do sometimes have problems convincing my employer to pursue that value creation. In my ideal job, which I’m now pursuing, I can create value any way I want, I can judge for myself when I’ve succeeded and when I’ve failed, I can decide when growth is necessary and when it isn’t, and I can defend the principles important to me. That’s why my wife and I took the step to create our own business. Our goals are pretty simple: survive, be happy and healthy, create value, work in open source and help clients be wildly successful.

Java

You’re programming in Java. Why?

I’m a strong believer in portability and choice. And I believe the JVM provides us that freedom. The fact that it’s one of the most optimized and efficient runtimes is just icing on the cake. I use Java because it’s the default language on the JVM. If another language replaced it as the default, I’d probably use that instead. Java is a means to an end to run and integrate code on the common runtime of the JVM. There are some compelling features that have made Java enjoyable, such as annotations and now lambdas and streams. However, given the choice, I prefer other languages, such as Ruby, Groovy and Clojure, as long as the language runs well on the JVM!

What’s least fun with Java?

The ceremony and verbosity. It’s too much to type.
I like code that can get a lot done in a small amount of space, but still be readable and intuitive. Java requires a lot of space. Java is also missing some really key features from the standard library that you find in most other languages. A good example is a single function that can read all the content from a file or URL. It’s a simple concept. It should have a simple function. Not so with Java. Also, getters and setters are dumb.

If you could change one thing with Java, what would that be?

Less ceremony for imports. I know, that’s not the first thing that comes to a lot of people’s minds, unless you’ve done a lot of work in a dynamic language. One of the biggest differences between Java and dynamic languages that is not often mentioned is the number of types in the default language set and the number of import statements you need to get more. It may not seem like a big deal, especially since IDEs help manage the import statements, but you’d be surprised how much they still slow you down, and they outright paralyze development without the help of an IDE. In Ruby (and to some extent, Groovy), you can write most simple programs without a single import (require) statement. That means you can just keep plugging away. Ruby also lets you import a whole library so it’s accessible to all the files in your application with a single statement (a RubyGem). In Java, you have to import every single type you use (or at least every package that contains them) in every single file. That’s a huge number of extra lines to manage. My hope is that this improvement comes along with Java modularity. You could import a module into your application, then use the types from it anywhere. That would be game changing for me. Combined with the language improvements in Java 8, my efficiency in Java just might be able to catch up to my efficiency in Ruby.

What’s your personal favorite among dynamic languages?

Ruby.
I’ve now written more code in Ruby than in any other programming language (https://www.openhub.net/accounts/mojavelinux/languages). (I’ve also explored the Ruby and Java interop extensively.) I can attest that Ruby is very natural, just as the language designer intended it to be. I’m also a fan of Groovy and Clojure. I like Groovy for the reasons I like Ruby, with the added benefit that it integrates seamlessly with Java. Clojure is my “challenge yourself language”. I wouldn’t say it feels natural to me yet, but it pushes me to write better code. It’s true what they say about a LISP. It does expand your thinking. Which programming technique has moved you forwards most and why? Functional programming, no doubt. This is a popular response, but for good reason. It’s more than just a trend. From my experience working with Java EE, Seam and CDI, I believe I’m qualified to say that managing state in a shared context is difficult in the best cases and usually fallible or impossible. As isolated processes become increasingly rare, we must change our approach to development. Functional programming gives us the necessary tools. Higher-order functions allow us to compose logic without having to rely on class hierarchy and the temptation of relying on shared state. Persistent collections and no side effects let us write code that is thread safe by default and, better yet, prepared to be optimized for multi-core and even distributed environments. Don’t take my word for it, though. Just listen to a few of Rich Hickey’s talks, then grab a book or tutorial on Clojure and start studying it. Your mind will convince you. What was the biggest project you’ve ever worked on? It was a J2EE web application that facilitated mortgage lending and automated appraisal services. The application was written in a somewhat obscure component-based framework that predated JSF, talking to an EJB2 backend and webMethods services.
It had to be loaded on the bootclasspath of WebLogic in order for it to run, for reasons I’ll never understand. In my time working there, the test suite never completed successfully and no one could figure out how to fix the behemoth. Debugging was a nightmare. It wasn’t pretty. Let’s just say I appreciated the need for a lightweight framework like Spring and changed my career path once I lost the stomach to work on this system. The nice part about that job was that I got experience using the XP development methodology (story cards, pair programming, continuously failing integration, etc). It’s probably the only reason the application was staying afloat and moving forward at all. What was the worst programming mistake you made? Not documenting (and not testing). I’m always getting on myself for not documenting. We think of programming mistakes as logic or syntax errors, but the worst crimes we can commit are not passing on knowledge and stability. It’s like spreading land mines around a property, forgetting about them and then turning the property into a park. The mistakes are going to be made by the next person who isn’t aware of all those things you need to know to keep the system running securely. I’ll end with a variation on the most popular Tweet at this year’s OSCON to help encourage you to be a more disciplined programmer. Always [write documentation] as if the [person] who ends up maintaining your code will be a violent psychopath who knows where you live. — John Woods (source) Reference: The Heroes of Java: Dan Allen from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

Beginner’s Guide To Hazelcast Part 1

Introduction I am going to be doing a series on Hazelcast. I learned about this product from Twitter. They decided to follow me and, after some research into what they do, I decided to follow them. I tweeted that Hazelcast would be a great backbone for a distributed password cracker. This got some interest and I decided to go make one. A vice president of Hazelcast started corresponding with me and we decided that while a cracker was a good project, the community (and I) would benefit from having a series of posts for beginners. I have been getting a lot of good information from the book preview The Book of Hazelcast found on www.hazelcast.com. What is Hazelcast? Hazelcast is a distributed, in-memory database. There are projects all over the world using Hazelcast. The code is open source under the Apache License 2.0. Features There are a lot of features already built into Hazelcast. Here are some of them:

- Auto discovery of nodes on a network
- High availability
- In-memory backups
- The ability to cache data
- Distributed thread pools (Distributed Executor Service)
- The ability to have data in different partitions
- The ability to persist data asynchronously or synchronously
- Transactions
- SSL support
- Structures to store data: IList, IMap, MultiMap, ISet
- Structures for communication among different processes: IQueue, ITopic
- Atomic operations: IAtomicLong
- Id generation: IdGenerator
- Locking: ISemaphore, ICondition, ILock, ICountDownLatch

Working with Hazelcast Just playing around with Hazelcast and reading has taught me to assume these things:

- The data will be stored as an array of bytes. (This is not an assumption; I got this directly from the book.)
- The data will go over the network.
- The data is remote.
- If the data is not in memory, it doesn’t exist.

Let me explain these assumptions: The data will be stored as an array of bytes I got this information from The Book of Hazelcast, so it is really not an assumption. This is important because not only is the data stored that way, so is the key.
This makes life very interesting if one uses something other than a primitive or a String as a key. The implementer of hashCode() and equals() must think about the key in terms of an array of bytes instead of as a class. The data will go over the network This is a distributed database, so parts of the data will be stored in other nodes. Backups and caching happen over the network too. There are techniques and settings to reduce transferring data over the network, but if one wants high availability, backups must be made. The data is remote This is a distributed database, so parts of the database will be stored on other nodes. I put in this assumption not to resign to the fact that the data is remote but to motivate designs that make sure operations are performed where most of the data is located. If the developer is skilled enough, remote access can be kept to a minimum. If the data is not in memory, it doesn’t exist Do not forget that this is an in-memory database. If it doesn’t get loaded into memory, the database will not know that data is stored somewhere else. This database doesn’t persist data to bring it up later. It persists because the data is important. There is no bringing it back from disk once it is out of memory like a conventional database (MySQL) would do. Data Storage Java developers will be happy to know that all of Hazelcast’s data storage containers except one are extensions of the java.util collection interfaces. For example, an IList follows the same method contracts as java.util.List. Here is a list of the different data storage types:

- IList – This keeps a number of objects in the order they were put in.
- IQueue – This follows BlockingQueue and can be used as an alternative to a Message Queue in JMS. This can be persisted via a QueueStore.
- IMap – This extends ConcurrentMap. It can also be persisted by a MapStore. It also has a number of other features that I will talk about in another post.
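Because keys are compared by their serialized form, two keys that are equal via equals() must also produce identical byte arrays. The sketch below illustrates the idea with plain java.io serialization (Hazelcast uses its own serialization machinery, and the BoatKey class is made up for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;
import java.util.Arrays;

public class KeyBytesDemo {

    // A custom key class: Serializable, with value-based equals()/hashCode().
    static class BoatKey implements Serializable {
        private static final long serialVersionUID = 1L;
        final String regatta;
        final int boatNumber;

        BoatKey(String regatta, int boatNumber) {
            this.regatta = regatta;
            this.boatNumber = boatNumber;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof BoatKey)) return false;
            BoatKey other = (BoatKey) o;
            return boatNumber == other.boatNumber && regatta.equals(other.regatta);
        }

        @Override
        public int hashCode() {
            return 31 * regatta.hashCode() + boatNumber;
        }
    }

    // Serialize an object to a byte array, the form a distributed map stores.
    static byte[] bytesOf(Serializable key) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(key);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] a = bytesOf(new BoatKey("Tasman", 1));
        byte[] b = bytesOf(new BoatKey("Tasman", 1));
        // Keys with identical fields produce identical byte arrays.
        System.out.println(Arrays.equals(a, b)); // true
    }
}
```

The danger is a field that participates in serialization but not in equals()/hashCode(): two objects you consider equal can then serialize to different byte arrays and end up as distinct keys in the distributed map.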
- ISet – This keeps a set of unique elements where order is not guaranteed.
- MultiMap – This does not follow a typical map, as there can be multiple values per key.

Example Setup For all the features that Hazelcast contains, the initial setup steps are really easy:

- Download the Hazelcast zip file at www.hazelcast.org and extract the contents.
- Add the jar files found in the lib directory to one’s classpath.
- Create a file named hazelcast.xml and put the following into the file:

<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config http://www.hazelcast.com/schema/config/hazelcast-config-3.0.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <network>
        <join><multicast enabled="true"/></join>
    </network>
    <map name="a"></map>
</hazelcast>

Hazelcast looks in a few places for a configuration file:

- The path defined by the property hazelcast.config
- hazelcast.xml in the classpath, if classpath is included in the hazelcast.config
- The working directory
- If all else fails, hazelcast-default.xml is loaded, which is in the hazelcast.jar

If one does not want to deal with a configuration file at all, the configuration can be done programmatically. The configuration example here defines multicast for joining nodes together. It also defines the IMap “a.” A Warning About Configuration Hazelcast does not copy configurations to each node. So if one wants to be able to share a data structure, it needs to be defined in every node exactly the same. Code This code brings up two nodes, places values in the first instance’s IMap using an IdGenerator to generate keys, and reads the data from instance2.
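For reference, a programmatic equivalent of the XML above might look roughly like this. This is a configuration sketch assuming the Hazelcast 3.x API (Config, MapConfig, and the network/join settings), not code from the original article:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ProgrammaticConfig {
    public static void main(String[] args) {
        Config config = new Config();
        // Equivalent of <join><multicast enabled="true"/></join>
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(true);
        // Equivalent of <map name="a"></map>
        config.addMapConfig(new MapConfig("a"));
        // Start a node using this configuration instead of hazelcast.xml
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
    }
}
```

Note the same warning applies: every node must build an identical configuration, since Hazelcast does not copy configurations between nodes.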
package hazelcastsimpleapp;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IdGenerator;
import java.util.Map;

/**
 * @author Daryl
 */
public class HazelcastSimpleApp {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();

        Map map = instance.getMap("a");
        IdGenerator gen = instance.getIdGenerator("gen");
        for (int i = 0; i < 10; i++) {
            map.put(gen.newId(), "stuff " + i);
        }

        Map map2 = instance2.getMap("a");
        for (Map.Entry entry : map2.entrySet()) {
            System.out.printf("entry: %d; %s\n", entry.getKey(), entry.getValue());
        }

        System.exit(0);
    }
}

Amazingly simple, isn’t it! Notice that I didn’t even use the IMap interface when I retrieved an instance of the map; I just used the java.util.Map interface. This isn’t good for using the distributed features of Hazelcast, but for this example it works fine. One can observe the assumptions at work here. The first assumption is storing the information as an array of bytes. Notice the data and keys are serializable. This is important because that is needed to store the data. The second and third assumptions hold true with the data being accessed by the instance2 node. The fourth assumption holds true because every value that was put into the “a” map was displayed when read. All of this example can be found at http://darylmathisonblog.googlecode.com/svn/trunk/HazelcastSimpleApp using subversion. The project was made using Netbeans 8.0. Conclusion A quick overview of the numerous features of Hazelcast was given, with a simple example showing IMap and IdGenerator. A list of assumptions that apply when developing in a distributed, in-memory database environment was discussed. Resources The Book of Hazelcast.
Download from http://www.hazelcast.com Reference: Beginner’s Guide To Hazelcast Part 1 from our JCG partner Daryl Mathison at the Daryl Mathison’s Java Blog blog....

Large Program? Release More Often

I’m working on the release planning chapter for Agile and Lean Program Management: Collaborating Across the Organization. There are many ways to plan releases. But the key? Release often. How often? I suggest once a month. Yes, have a real, honest-to-goodness release once a month. I bet that for some of you, this is counter-intuitive. “We have lots of teams. Lots of people. Our iterations are three weeks long. How can we release once a month?” Okay, release every three weeks. I’m easy. Look, the more people and teams on your program, the more feedback you need. The more chances you have for getting stuck, being in the death spiral of slowing inertia. What you want is to gain momentum. Large programs magnify this problem. If you want to succeed with a large agile program, you need to see progress, wherever it is. Hopefully, it’s all over the program. But, even if it’s not, you need to see it and get feedback. Waiting for feedback is deadly. Here’s what you do:

- Shorten all iterations to two weeks or less. You then have a choice to release every two or four weeks. If you have three-week iterations, plan to release every three weeks.
- Make all features sufficiently small so that they fit into an iteration. This means you learn how to make your stories very small. Yes, you learn how. You learn what a feature set (also known as a theme) is. You learn to break down epics.
- You learn how to have multiple teams collaborate on one ranked backlog. Your teams start to swarm on features, so the teams complete one feature in one iteration or in flow.
- The teams integrate all the time. No staged integration.

Remember this picture, the potential for release frequency? That’s the release frequency outside your building. I’m talking about your internal releasing right now. You want to release all the time inside your building. You need the feedback, to watch the product grow. In agile, we’re fond of saying, “If it hurts, do it more often.” That might not be so helpful.
Here’s a potential translation: “Your stuff is too big. Make it smaller.” Make your release planning smaller. Make your stories smaller. Integrate smaller chunks at one time. Move one story across the board at one time. Make your batches smaller for everything. When you make everything smaller (remember Short is Beautiful?), you can go bigger. Reference: Large Program? Release More Often from our JCG partner Johanna Rothman at the Managing Product Development blog....

Continuous Integration is Dead

A few days ago, my article “Why Continuous Integration Doesn’t Work” was published at DevOps.com. Almost the same day I received a few strongly negative critiques on Twitter. Here is my response to the unasked question: Why the hell shouldn’t continuous integration work, being such a brilliant and popular idea? Even though I have some experience in this area, I won’t use it as an argument. I’ll try to rely only on logic instead. BTW, my experience includes five years of using Apache Continuum, Hudson, CruiseControl, and Jenkins in over 50 open source and commercial projects. Besides that, a few years ago I created a hosted continuous integration service called fazend.com, renamed to rultor.com in 2013. Currently, I’m also an active user of Travis. How Continuous Integration Should Work The idea is simple and obvious. Every time you make a new commit to the master branch (or /trunk in Subversion), a continuous integration server (or service) attempts to build the entire product. “Build” means compile, unit test, integration test, quality analysis, etc. The result is either “success” or “failure”. If it is a success, we say that “the build is clean”. If it is a failure, we say that “the build is broken”. The build usually gets broken because someone breaks it by committing new code that turns previously passing unit tests into failing ones. This is the technical side of the problem. It always works. Well, it may have its problems, like hard-coded dependencies, lack of isolation between environments or parallel build collisions, but this article is not about those. If the application is well written and its unit tests are stable, continuous integration is easy. Technically. Let’s see the organizational side. Continuous integration is not only a server that builds, but a management/organizational process that should “work”.
Being a process that works means exactly what Jez Humble said in Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, on page 55: Crucially, if the build fails, the development team stops whatever they are doing and fixes the problem immediately This is what doesn’t work and can’t work. Who Needs This? As we see, continuous integration is about setting the entire development team on pause and fixing the broken build. Let me reiterate. Once the build is broken, everybody should focus on fixing it and making a commit that returns the build to the stable state. Now, my question is — who, in an actively working team, may need this? A product owner, who is interested in launching new features to the market as soon as possible? Or maybe a project manager, who is responsible for the deadlines? Or maybe programmers, who hate to fix someone else’s bugs, especially under pressure. Who likes this continuous integration and who needs it? Nobody. What Happens In Reality? I can tell you. I’ve seen it multiple times. The scenario is always the same. We just start to ignore that continuous integration build status. Either the build is clean or it is broken, and we continue to do what we were doing before. We don’t stop and fix it, as Jez Humble recommends. Instead, we ignore the information that’s coming from the continuous integration server. Eventually, maybe tomorrow or on Monday, we’ll try to find some spare time and will try to fix the build. Only because we don’t like that red button on the dashboard and want to turn it into a green one. What About Discipline? Yes, there is another side of this coin. We can try to enforce discipline in the team. We can make it a strict rule, that our build is always clean and whoever breaks it gets some sort of a punishment. Try doing this and you will get fear-driven development.
Programmers will be afraid of committing anything to the repository because they will know that if they cause a build failure they will have to apologize, at least. A strict discipline (which I’m a big fan of) in this case only makes the situation worse. The entire development process slows down and programmers keep their code to themselves for as long as possible, to avoid possibly broken builds. When it’s time to commit, their changes are so massive that merging becomes very difficult and sometimes impossible. As a result you get a lot of throw-away code, written by someone but never committed to master, because of that fear factor. OK, What Is The Solution? I wrote about it before; it is called “read-only master branch”. It is simple — prohibit anyone from merging anything into master and create a script that anyone can call. The script will merge, test, and commit. The script will not make any exceptions. If any branch breaks at even one unit test, the entire branch will be rejected. In other words: raise the red flag before the code gets into master. This solves all problems. First, the build is always clean. We simply can’t break it because nobody can commit unless his code keeps the build clean. Second, there is no fear of breaking anything. Simply because you technically can’t do it. All you can do is get a negative response from a merging script. Then you fix your errors and tell the script to try again. Nobody sees these attempts, and you don’t need to apologize. Fear factor is gone. BTW, try to use rultor.com to enforce this “read-only master branch” principle in your project. Related Posts You may also find these posts interesting:

- Master Branch Must Be Read-Only
- Project Lifecycle in Teamed.io
- 10 Hosted Continuous Integration Services for a Private Repository
- Why Monetary Awards Don’t Work?
- Remote Programming in Teamed.io

Reference: Continuous Integration is Dead from our JCG partner Yegor Bugayenko at the About Programming blog....

Use Byteman in JBoss Fuse / Fabric8 / Karaf

Have you ever found yourself trying to understand how come something very simple is not working? You are writing code in a well known context and for whatever reason it’s not working. And you trust your platform, so you carefully read all the logs that you have. And still you have no clue why something is not behaving as expected. Usually, what I do next, if I am lucky enough to be working on an Open Source project, is start reading the code. That often works; but almost always you haven’t written that code; and you don’t know the product that well. So, yeah, you see which variables are in the context. You have no clue about their possible values and, what’s worse, you have no idea where or, even worse, when those values were created. At this point, what I usually do is connect with a debugger. I will never remember the JVM parameters a java process needs to allow debugging, but I know that I have those written somewhere. And modern IDEs suggest them, so it’s not a big pain connecting remotely to a complex application server. Okay, we are connected. We can place a breakpoint not far from the section we consider important and step through the code, eventually adding more breakpoints. The IDE variables view allows us to see the values of the variables in context. We can even browse the whole object tree and invoke snippets of code, useful in case the plain memory state of an object doesn’t really give the precise information that we need (imagine you want to format a Date or filter a collection). We have all the instruments but… this is a slow process. Each time I stop at a specific breakpoint I have to manually browse the variables. I know, we can improve the situation with watched variables, which stick on top of the overview window and give you a quick look at what you have already identified as important.
But I personally find that watches make sense only if you have a very small set of variables: since they all share the same namespace, you end up with many unset values that just distract the eye when you are not in a scope that sees those variables. I have recently learnt a trick to improve these workflows that I want to share with you in case you don’t know it yet: IntelliJ and, with a smart trick, even Eclipse allow you to add print statements when you pass through a breakpoint. If you combine this with preventing the breakpoint from pausing, you have a nice way to augment the code you are debugging with log invocations. For IntelliJ check here: http://www.jetbrains.com/idea/webhelp/enabling-disabling-and-removing-breakpoints.html While for Eclipse, check this trick: http://moi.vonos.net/2013/10/adhoc-logging/ or let me know if there is a cleaner or newer way to reach the same result. The trick above works, but its main drawback is that you are adding a local configuration to your workspace. You cannot share this easily with someone else. And you might want to re-use your workspace for some other session, and seeing all those log entries or breakpoints can distract you. So, while looking for something external to my IDE, I decided to give Byteman a try. Byteman actually offers much more than what I needed this time, and that’s probably the main reason I decided to find out whether I could use it with Fabric8. A quick recap of what Byteman does, taken directly from its documentation: Byteman is a bytecode manipulation tool which makes it simple to change the operation of Java applications either at load time or while the application is running. It works without the need to rewrite or recompile the original program.
It offers:

- tracing execution of specific code paths and displaying application or JVM state
- subverting normal execution by changing state, making unscheduled method calls or forcing an unexpected return or throw
- orchestrating the timing of activities performed by independent application threads
- monitoring and gathering statistics summarising application and JVM operation

In my specific case I am going to use the first of those listed behaviors, but you can easily guess that all the other aspects might become handy at some point:

- add some logic to prevent a NullPointerException
- short-circuit some logic because you are hitting a bug that is not in your code base but you still want to see what happens if that bug wasn’t there
- anything else you can imagine…

Starting to use Byteman is normally particularly easy. You are not even forced to start your JVM with specific instructions. You can just attach to an already running process! This works most of the time, but unluckily not on Karaf with the default configuration, due to OSGi implications. But no worries, the functionality is just a simple configuration edit away. You have to edit the file $KARAF_HOME/etc/config.properties and add these 2 packages to the property org.osgi.framework.bootdelegation: org.jboss.byteman.rule,org.jboss.byteman.rule.exception That property is used to instruct the OSGi framework to provide the classes in those packages from the parent classloader. See http://felix.apache.org/site/apache-felix-framework-configuration-properties.html In this way, you will avoid the ClassCastExceptions raised when your Byteman rules are triggered. That’s pretty much all the extra work we needed to use Byteman on Fuse.
Here is a practical example of my interaction with the platform:

# assume you have modified Fabric8's config.properties and started it
# and that you are using fabric8-karaf-1.2.0-SNAPSHOT

# find your Fabric8 process id
$ ps aux | grep karaf | grep -v grep | cut -d ' ' -f3
5200

# navigate to the folder where you have extracted Byteman
cd /data/software/redhat/utils/byteman/byteman-download-
# export Byteman env variable:
export BYTEMAN_HOME=$(pwd)
cd bin/
# attach Byteman to Fabric8 process, no output expected unless you enable those verbose flags
sh bminstall.sh 5200
# add these flags if you have any kind of problem and want to see what's going on:
# -Dorg.jboss.byteman.debug -Dorg.jboss.byteman.verbose

# install our Byteman custom rules
$ sh bmsubmit.sh ~/Desktop/RBAC_Logging.btm
install rule RBAC HandleInvoke
install rule RBAC RequiredRoles
install rule RBAC CanBypass
install rule RBAC UserHasRole

# invoke some operation on Fabric8 to trigger our rules:
$ curl -u admin:admin 'http://localhost:8181/jolokia/exec/io.fabric8:type=Fabric/containersForVersion(java.lang.String)/1.0'
{"timestamp":1412689553,"status":200,"request":{"operation...... very long response}

# and now check your Fabric8 shell:
OBJECT: io.fabric8:type=Fabric METHOD: containersForVersion ARGS: [1.0]
CANBYPASS: false
REQUIRED ROLES: [viewer, admin]
CURRENT_USER_HAS_ROLE(viewer): true

Where my Byteman rules look like:

RULE RBAC HandleInvoke
CLASS org.apache.karaf.management.KarafMBeanServerGuard
METHOD handleInvoke(ObjectName, String, Object[], String[])
AT ENTRY
IF TRUE
DO traceln(" OBJECT: " + $objectName + " METHOD: " + $operationName + " ARGS: " + java.util.Arrays.toString($params));
ENDRULE

RULE RBAC RequiredRoles
CLASS org.apache.karaf.management.KarafMBeanServerGuard
METHOD getRequiredRoles(ObjectName, String, Object[], String[])
AT EXIT
IF TRUE
DO traceln(" REQUIRED ROLES: " + $!);
ENDRULE

RULE RBAC CanBypass
CLASS org.apache.karaf.management.KarafMBeanServerGuard
METHOD canBypassRBAC(ObjectName)
AT EXIT
IF TRUE
DO traceln(" CANBYPASS: " + $!);
ENDRULE

RULE RBAC UserHasRole
CLASS org.apache.karaf.management.KarafMBeanServerGuard
METHOD currentUserHasRole(String)
AT EXIT
IF TRUE
DO traceln(" CURRENT_USER_HAS_ROLE(" + $requestedRole + "): " + $!);
ENDRULE

Obviously this was just a short example of what Byteman can do for you. I’d invite you to read the project documentation, since you might discover nice constructs that allow you to write easier rules or to refine them to trigger only when it’s relevant for you (if in my example you see some noise in the output, you probably have a Hawtio instance open that is doing its polling, thus triggering some of our installed rules). A special thank you goes to Andrew Dinn, who explained to me how Byteman works and the reasons for my initial failures. The screencast is less than optimal due to my errors, but you can clearly see the added noise since I had a Hawt.io instance invoking protected JMX operations! Reference: Use Byteman in JBoss Fuse / Fabric8 / Karaf from our JCG partner Paolo Antinori at the Someday Never Comes blog....

Conceptual Model vs Graph Model

We’ve started running some sessions on graph modelling in London and during the first session it was pointed out that the process I’d described was very similar to the one used when modelling for a relational database. I thought I’d better do some reading on the way relational models are derived and I came across an excellent video by Joe Maguire titled ‘Data Modelers Still Have Jobs: Adjusting For the NoSQL Environment‘ Joe starts off by showing a ‘big picture framework’ which describes the steps involved in coming up with a relational model. A couple of slides later he points out that we often blur the lines between the different stages and end up designing a model which contains a lot of implementation details. If, on the other hand, we compare a conceptual model with a graph model, this is less of an issue as the two models map quite closely:

- Entities -> Nodes / Labels
- Attributes -> Properties
- Relationships -> Relationships
- Identifiers -> Unique Constraints

Unique Constraints don’t quite capture everything that Identifiers do, since it’s possible to create a node of a specific label without specifying the property which is uniquely constrained. Other than that, though, each concept matches one for one. We often say that graphs are white board friendly, by which we mean that the model you sketch on a white board is the same as that stored in the database.
For example, consider the following sketch of people and their interactions with various books. If we were to translate that into a write query using Neo4j’s Cypher query language, it would look like this:

CREATE (ian:Person {name: "Ian"})
CREATE (alan:Person {name: "Alan"})
CREATE (gg:Person:Author {name: "Graham Greene"})
CREATE (jlc:Person:Author {name: "John Le Carre"})

CREATE (omih:Book {name: "Our Man in Havana"})
CREATE (ttsp:Book {name: "Tinker Tailor, Soldier, Spy"})

CREATE (gg)-[:WROTE]->(omih)
CREATE (jlc)-[:WROTE]->(ttsp)
CREATE (ian)-[:PURCHASED {date: "05-02-2011"}]->(ttsp)
CREATE (ian)-[:PURCHASED {date: "08-09-2011"}]->(omih)
CREATE (alan)-[:PURCHASED {date: "05-07-2014"}]->(ttsp)

There are a few extra brackets and the ‘CREATE’ keyword, but we haven’t lost any of the fidelity of the domain, and in my experience a non-technical / commercial person would be able to understand the query. By contrast, this article shows the steps we might take from a conceptual model describing employees, departments and unions to the eventual relational model. If you don’t have the time to read through that: we start with an initial conceptual model and end up with a model that can be stored in our relational database. You’ll notice we’ve lost the relationship types and they’ve been replaced by 4 foreign keys that allow us to join the different tables/sets together. In a graph model we’d have been able to stay much closer to the conceptual model and therefore closer to the language of the business. I’m still exploring the world of data modelling and next up for me is to read Joe’s ‘Mastering Data Modeling‘ book. I’m also curious how normal forms and data redundancy apply to graphs, so I’ll be looking into that as well. Thoughts welcome, as usual! Reference: Conceptual Model vs Graph Model from our JCG partner Mark Needham at the Mark Needham Blog blog....

Property-based testing with ScalaCheck – custom generators

One of the core data structures provided by Hazelcast is IMap<K, V> extending java.util.concurrent.ConcurrentMap – which is basically a distributed map, often used as a cache. You can configure such a map to use a custom MapLoader<K, V> – a piece of Java code that will be asked, every time you try to .get() something from that map (by key), for any key that is not yet there. This is especially useful when you use IMap as a distributed in-memory cache – if client code asks for something that wasn’t cached yet, Hazelcast will transparently execute your MapLoader.load(key):

public interface MapLoader<K, V> {
    V load(K key);
    Map<K, V> loadAll(Collection<K> keys);
    Set<K> loadAllKeys();
}

The remaining two methods are used during startup to optionally warm up the cache by loading a pre-defined set of keys. Your custom MapLoader can reach out to a (No)SQL database, web service, file system, you name it. Working with such a cache is much more convenient because you don’t have to implement the tedious “if not in cache, load and put in cache” cycle. Moreover, MapLoader has a fantastic feature – if many clients are asking at the same time for the same key (from different threads, or even different cluster members – thus machines), MapLoader is executed only once. This significantly decreases load on external dependencies, without introducing any complexity. In essence IMap with MapLoader is similar to LoadingCache found in Guava – but distributed. However, with great power comes great frustration, especially when you don’t understand the peculiarities of the API and the inherent complexity of a distributed system. First let’s see how to configure a custom MapLoader. You can use hazelcast.xml for that (<map-store/> element), but then you have no control over the life-cycle of your loader (e.g. you can’t use a Spring bean).
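To make the contract concrete, here is a self-contained sketch of a loader backed by a plain in-memory map standing in for an external database. The interface is copied inline from the snippet above; FakeDbLoader and its contents (key 42, value "Forty two") are made-up illustrations – a real loader would implement com.hazelcast.core.MapLoader and query a real store:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class MapLoaderSketch {

    // The MapLoader contract quoted above (normally com.hazelcast.core.MapLoader).
    interface MapLoader<K, V> {
        V load(K key);
        Map<K, V> loadAll(Collection<K> keys);
        Set<K> loadAllKeys();
    }

    // A loader backed by a plain map simulating an external database or web service.
    static class FakeDbLoader implements MapLoader<Integer, String> {
        private final Map<Integer, String> db = new HashMap<>();

        FakeDbLoader() {
            db.put(42, "Forty two");
        }

        @Override
        public String load(Integer key) {
            return db.get(key); // null means "no value for this key"
        }

        @Override
        public Map<Integer, String> loadAll(Collection<Integer> keys) {
            // Bulk variant: load each requested key, skipping absent ones.
            Map<Integer, String> result = new LinkedHashMap<>();
            for (Integer key : keys) {
                String value = load(key);
                if (value != null) {
                    result.put(key, value);
                }
            }
            return result;
        }

        @Override
        public Set<Integer> loadAllKeys() {
            // Keys used to optionally warm up the cache at startup.
            return db.keySet();
        }
    }

    public static void main(String[] args) {
        MapLoader<Integer, String> loader = new FakeDbLoader();
        System.out.println(loader.load(42)); // Forty two
        System.out.println(loader.load(1));  // null
    }
}
```

Returning null from load() signals that no value exists for the key – exactly the behavior exercised by the ‘should return null’ test later in the article.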
A better idea is to configure Hazelcast directly from code and pass an instance of MapLoader:

class HazelcastTest extends Specification {

    public static final int ANY_KEY = 42
    public static final String ANY_VALUE = "Forty two"

    def 'should use custom loader'() {
        given:
        MapLoader loaderMock = Mock()
        loaderMock.load(ANY_KEY) >> ANY_VALUE
        def hz = build(loaderMock)
        IMap<Integer, String> emptyCache = hz.getMap("cache")

        when:
        def value = emptyCache.get(ANY_KEY)

        then:
        value == ANY_VALUE

        cleanup:
        hz?.shutdown()
    }

Notice how we obtain an empty map, but when asked for ANY_KEY, we get ANY_VALUE in return. This is not a surprise; this is what our loaderMock was expected to do. The Hazelcast configuration is extracted into a helper method:

def HazelcastInstance build(MapLoader<Integer, String> loader) {
    final Config config = new Config("Cluster")
    final MapConfig mapConfig = config.getMapConfig("default")
    final MapStoreConfig mapStoreConfig = new MapStoreConfig()
    mapStoreConfig.factoryImplementation = {name, props -> loader} as MapStoreFactory
    mapConfig.mapStoreConfig = mapStoreConfig
    return Hazelcast.getOrCreateHazelcastInstance(config)
}

Any IMap (identified by name) can have a different configuration. However, the special "default" map specifies the default configuration for all maps.
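Before digging into the pitfalls, the “loader is executed only once per key” guarantee mentioned earlier can be mimicked within a single JVM using nothing but the JDK. The sketch below is a rough, non-distributed analogy of what IMap + MapLoader give you across a cluster (class and method names are mine, purely illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Single-JVM analogy of the IMap + MapLoader "load once" guarantee.
// slowLoad() stands in for an expensive MapLoader.load(key).
public class LoadOnceDemo {
    static final AtomicInteger loaderCalls = new AtomicInteger();
    static final ConcurrentHashMap<Integer, String> cache = new ConcurrentHashMap<>();

    static String slowLoad(Integer key) {
        loaderCalls.incrementAndGet();   // count how often we really load
        return "value-" + key;           // pretend this hit a database
    }

    public static void main(String[] args) throws InterruptedException {
        // Many threads ask for the same absent key at the same time;
        // computeIfAbsent guarantees the mapping function runs at most
        // once per key, similar to MapLoader within a Hazelcast cluster.
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() ->
                    cache.computeIfAbsent(42, LoadOnceDemo::slowLoad));
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(cache.get(42));      // value-42
        System.out.println(loaderCalls.get());  // 1 - loaded exactly once
    }
}
```

The crucial difference, of course, is that Hazelcast enforces this across JVMs and machines, not just across threads of one process.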
Let’s play a bit with custom loaders and see how they behave when MapLoader returns null or throws an exception:

def 'should return null when custom loader returns it'() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> cache = hz.getMap("cache")

    when:
    def value = cache.get(ANY_KEY)

    then:
    value == null
    !cache.containsKey(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

public static final String SOME_ERR_MSG = "Don't panic!"

def 'should propagate exceptions from loader'() {
    given:
    MapLoader loaderMock = Mock()
    loaderMock.load(ANY_KEY) >> {throw new UnsupportedOperationException(SOME_ERR_MSG)}
    def hz = build(loaderMock)
    IMap<Integer, String> cache = hz.getMap("cache")

    when:
    cache.get(ANY_KEY)

    then:
    UnsupportedOperationException e = thrown()
    e.message.contains(SOME_ERR_MSG)

    cleanup:
    hz?.shutdown()
}

MapLoader is executed in a separate thread

So far nothing surprising. The first trap you might encounter is how threads interact here. MapLoader is never executed from the client thread, always from a separate thread pool:

def 'loader works in a different thread'() {
    given:
    MapLoader loader = Mock()
    loader.load(ANY_KEY) >> {key -> "$key: ${Thread.currentThread().name}"}
    def hz = build(loader)
    IMap<Integer, String> cache = hz.getMap("cache")

    when:
    def value = cache.get(ANY_KEY)

    then:
    value != "$ANY_KEY: ${Thread.currentThread().name}"

    cleanup:
    hz?.shutdown()
}

This test passes because the current thread is "main" while loading occurs from within something like "hz.Cluster.partition-operation.thread-10". This is an important observation and is actually quite obvious if you remember that when many threads try to access the same absent key, the loader is called only once. But more needs to be explained here. Almost every operation on IMap is encapsulated into one of the operation objects (see also: Command pattern). This operation is later dispatched to one or all cluster members and executed remotely in a separate thread pool, or even on a different machine.
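To make that dispatching more concrete, here is a drastically simplified plain-Java sketch of key-based routing. Hazelcast hashes the serialized key into one of a fixed number of partitions (271 by default) and maintains a cluster-wide partition table mapping partitions to members; the modulo-based owner assignment below is purely illustrative and not Hazelcast’s actual code:

```java
import java.util.Arrays;
import java.util.List;

// Drastically simplified sketch of key -> partition -> member routing.
// Real Hazelcast hashes the *serialized* key and keeps a cluster-wide
// partition table; the modulo owner assignment here is illustrative only.
public class PartitionRoutingSketch {

    static final int PARTITION_COUNT = 271; // Hazelcast's default

    static int partitionId(Object key) {
        // floorMod keeps the result non-negative for negative hash codes
        return Math.floorMod(key.hashCode(), PARTITION_COUNT);
    }

    static String ownerOf(Object key, List<String> members) {
        // Hypothetical round-robin partition ownership
        return members.get(partitionId(key) % members.size());
    }

    public static void main(String[] args) {
        List<String> members = Arrays.asList("A", "B", "C", "D");
        for (Object key : Arrays.asList(42, "user:17", "order:9000")) {
            System.out.println(key + " -> partition " + partitionId(key)
                    + ", owned by member " + ownerOf(key, members));
        }
        // The member calling IMap.get(key) is often NOT the owner:
        // loading happens wherever the key's partition lives.
    }
}
```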
Thus, don’t expect loading to occur in the same thread, or even the same JVM/server(!). This leads to an interesting situation where you request a given key on one machine, but the actual loading happens on another. Or even more epic – machines A, B and C request a given key whereas machine D physically loads the value for that key. The decision which machine is responsible for loading is made based on a consistent hashing algorithm. One final remark – of course you can customize the size of the thread pools running these operations, see Advanced Configuration Properties.

IMap.remove() calls MapLoader

This one is surprising at first, yet to be expected once you think about it:

def 'IMap.remove() on non-existing key still calls loader (!)'() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> emptyCache = hz.getMap("cache")

    when:
    emptyCache.remove(ANY_KEY)

    then:
    1 * loaderMock.load(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

Look carefully! All we do is remove an absent key from a map. Nothing else. Yet, loaderMock.load() was executed. This is a problem especially when your custom loader is particularly slow or expensive. Why was it executed here? Look up the API of java.util.Map#remove():

V remove(Object key)
[...] Returns the value to which this map previously associated the key, or null if the map contained no mapping for the key.

Maybe it’s controversial, but one might argue that Hazelcast is doing the right thing. If you consider our map with MapLoader attached as a sort of view onto external storage, it makes sense. When removing an absent key, Hazelcast actually asks our MapLoader: what could have been the previous value? It pretends as if the map contained every single value returned from MapLoader, but loaded lazily.
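That “previous value” contract is easy to see with a plain in-memory HashMap – the only difference is that a HashMap already has the value at hand, while an IMap with a MapLoader must fetch it from the external store to honour the same contract (the demo class below is mine, just to illustrate):

```java
import java.util.HashMap;
import java.util.Map;

// java.util.Map#remove() must return the previously associated value.
// A HashMap has it in memory; an IMap backed by a MapLoader must load it.
public class RemoveContractDemo {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        map.put(42, "cached");

        String previous = map.remove(42); // "cached" - the old mapping
        String nothing = map.remove(42);  // null - no mapping any more

        System.out.println(previous); // cached
        System.out.println(nothing);  // null
    }
}
```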
This is not a bug, since there is a special method IMap.delete() that works just like remove(), but doesn’t load the “previous” value:

@Issue("https://github.com/hazelcast/hazelcast/issues/3178")
def "IMap.delete() doesn't call loader"() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> cache = hz.getMap("cache")

    when:
    cache.delete(ANY_KEY)

    then:
    0 * loaderMock.load(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

Actually, there was a bug: IMap.delete() should not call MapLoader.load(), fixed in 3.2.6 and 3.3. If you haven’t upgraded yet, even IMap.delete() will go to MapLoader. If you think IMap.remove() is surprising, check out how put() works!

IMap.put() calls MapLoader

If you thought remove() loading a value first is suspicious, what about an explicit put() loading a value for the given key first? After all, we are explicitly putting something into the map by key; why would Hazelcast load this value first via MapLoader?

def 'IMap.put() on non-existing key still calls loader (!)'() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> emptyCache = hz.getMap("cache")

    when:
    emptyCache.put(ANY_KEY, ANY_VALUE)

    then:
    1 * loaderMock.load(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

Again, let’s refer to the java.util.Map.put() JavaDoc:

V put(K key, V value)
[...] Returns: the previous value associated with key, or null if there was no mapping for key.

Hazelcast pretends that IMap is just a lazy view over some external source, so when we put() something into an IMap that wasn’t there before, it first loads the “previous” value so that it can return it. Again, this is a big issue when MapLoader is slow or expensive – if we can explicitly put something into the map, why load it first?
Luckily there is a straightforward workaround, putTransient():

def "IMap.putTransient() doesn't call loader"() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> cache = hz.getMap("cache")

    when:
    cache.putTransient(ANY_KEY, ANY_VALUE, 1, TimeUnit.HOURS)

    then:
    0 * loaderMock.load(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

One caveat is that you have to provide the TTL explicitly, rather than relying on the configured IMap defaults. But this also means you can assign an arbitrary TTL to every map entry, not only globally to the whole map – useful.

IMap.containsKey() involves MapLoader, can be slow or block

Remember our analogy: IMap with a backing MapLoader behaves like a view over an external source of data. That’s why it shouldn’t be a surprise that containsKey() on an empty map will call MapLoader:

def "IMap.containsKey() calls loader"() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> emptyMap = hz.getMap("cache")

    when:
    emptyMap.containsKey(ANY_KEY)

    then:
    1 * loaderMock.load(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

Every time we ask for a key that’s not present in the map, Hazelcast will ask MapLoader. Again, this is not an issue as long as your loader is fast, side-effect free and reliable. If this is not the case, this will kill you:

def "IMap.get() after IMap.containsKey() calls loader twice"() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> cache = hz.getMap("cache")

    when:
    cache.containsKey(ANY_KEY)
    cache.get(ANY_KEY)

    then:
    2 * loaderMock.load(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

Despite containsKey() calling MapLoader, it doesn’t “cache” the loaded value to use it later. That’s why containsKey() followed by get() calls MapLoader two times, which is quite wasteful. Luckily, if you call containsKey() on an existing key, it runs almost immediately, although it will most likely require a network hop.
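The double load can be modelled in a few lines of plain Java. The toy class below (purely illustrative, not Hazelcast code) treats itself as a view over a loader the way IMap does: containsKey() consults the loader but throws the result away, so containsKey() followed by get() on an absent key loads twice:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Toy single-JVM model of the containsKey()/get() pitfall: a cache that
// treats itself as a view over a loader. containsKey() consults the
// loader but does not cache the result, so containsKey() + get() on an
// absent key hits the loader twice.
public class ViewOverLoader {
    final Map<Integer, String> stored = new HashMap<>();
    final Function<Integer, String> loader;
    final AtomicInteger loaderCalls = new AtomicInteger();

    ViewOverLoader(Function<Integer, String> loader) {
        this.loader = loader;
    }

    String load(Integer key) {
        loaderCalls.incrementAndGet();
        return loader.apply(key);
    }

    boolean containsKey(Integer key) {
        // asks the loader, but throws the loaded value away
        return stored.containsKey(key) || load(key) != null;
    }

    String get(Integer key) {
        // get() does cache what it loaded, like IMap
        return stored.computeIfAbsent(key, this::load);
    }

    public static void main(String[] args) {
        ViewOverLoader cache = new ViewOverLoader(k -> "value-" + k);
        cache.containsKey(42);  // first load
        cache.get(42);          // second load - nothing was cached
        System.out.println(cache.loaderCalls.get());  // 2
    }
}
```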
What is not so fortunate is the behaviour of keySet(), values(), entrySet() and a few other methods before version 3.3 of Hazelcast. These would all block in case any key was being loaded at the time. So if you have a map with thousands of keys and you ask for keySet(), one slow MapLoader.load() invocation will block the whole cluster. This was fortunately fixed in 3.3, so that IMap.keySet(), IMap.values(), etc. do not block, even when some keys are being computed at the moment. As you can see, the IMap + MapLoader combo is powerful, but also filled with traps. Some of them are dictated by the API, some by the distributed nature of Hazelcast, and finally some are implementation specific. Be sure you understand them before implementing a loading cache feature.

Reference: Hazelcast’s MapLoader pitfalls from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....

4 Foolproof Tips Get You Started With JBoss BRMS 6.0.3

Last week Red Hat released the next version of JBoss BRMS, labeled 6.0.3, and it is available in their Customer Portal for those with a subscription. If you are curious as to what is new in this release, see the release notes and the rest of the documentation online in the Customer Portal. What we are looking for are some easy ways to get started with this new release, and this article has just the things you are looking for. We will give you a foolproof installation to get you started, then show you a complete online web shop project you can experiment with, provide a completed Cool Store project for your evaluation needs, and finally provide you with an advanced integration example that ties together JBoss BRMS with JBoss Data Virtualization.

Tip #1

This is the first step in your journey: getting up and running without a lot of hassle or wasted time. It starts with the JBoss BRMS Install Demo, which gets you going with a clean installation, a user to get started with, and the product open in your browser for you to begin designing your rules, events and rule flows.

Tip #2

Now you have an installation and you are looking at the product, but what should you do next? No worries, we have a very nice hands-on tutorial for you that shows you how to take your installation from zero to a full-blown online web shopping cart application. You will build the rules, events and rule flow needed to realize this application. Check out this online workshop here:

Tip #3

Maybe you are just looking to evaluate or play with the product; we also have a fully finished Cool Store retail example for you to enjoy. This is the fully completed online web shopping cart application with the rules, decision table, events and rule flow that you can also put together yourself (see Tip #2). This project also has links to more information and details about what it is made of, and provides a video walkthrough.
Give it a go and install the Cool Store Demo project to evaluate JBoss BRMS today.

Tip #4

The final step is to examine more advanced use cases, like integration with JBoss Data Virtualization. It just so happens we have an interesting project, the JBoss BRMS & JBoss DV Integration Demo, that is just as easy to install as the previous ones and demonstrates the value of decision logic being used with diverse data sources in an enterprise. Also be sure to enjoy the extensive video collection presented in the project’s documentation; it walks you through all the details. We hope these foolproof tips are all you need to get started with JBoss BRMS and the new 6.0.3 release.

Reference: 4 Foolproof Tips Get You Started With JBoss BRMS 6.0.3 from our JCG partner Eric Schabell at the Eric Schabell’s blog blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.