


Grails chained select – load data on one dropdown box depending on another

This tutorial will show how to create two select boxes where the second select box’s items depend on the value of the first one. Developers encounter this requirement commonly; it is also called a chained select. An example is selecting a country, followed by selecting a state: the choices for states should be based on which country was selected.

Sample Output

Here is a sample output. We have a select box for category followed by a selection for sub-category. Initially, when no item is selected in the first box, the second one is empty. When the user chooses the Color category, the sub-categories will show different color selections. When the user chooses the Shape category, the sub-categories will show different shape selections.

Test Domain Class

There are two domain classes in this example:

package asia.grails.test

class Category {
    static hasMany = [subCategories: SubCategory]
    String name
    public String toString() {
        return name
    }
}

package asia.grails.test

class SubCategory {
    static belongsTo = Category
    Category category
    String name
    public String toString() {
        return name
    }
}

This is just a simple one-to-many relationship.

Test Data

We populate test data via BootStrap.groovy:

import asia.grails.test.Category
import asia.grails.test.SubCategory

class BootStrap {
    def init = { servletContext ->
        if (Category.count() == 0) {
            Category color = new Category(name: 'Color').save()
            new SubCategory(category: color, name: 'Red').save()
            new SubCategory(category: color, name: 'Green').save()
            new SubCategory(category: color, name: 'Blue').save()
            Category shape = new Category(name: 'Shape').save()
            new SubCategory(category: shape, name: 'Square').save()
            new SubCategory(category: shape, name: 'Circle').save()
            Category size = new Category(name: 'Size').save()
            new SubCategory(category: size, name: 'Small').save()
            new SubCategory(category: size, name: 'Medium').save()
            new SubCategory(category: size, name: 'Large').save()
        }
    }
    def destroy = {
    }
}

We will have 3 categories: Color, Shape, and Size.
Chained Select Form

We display a form. The controller code is simple:

package asia.grails.test

class TestController {
    def form() {
    }
}

The content of form.gsp is this:

<%@ page import="asia.grails.test.Category" %>
<!DOCTYPE html>
<html>
<head>
    <meta name="layout" content="main">
    <title>Chained Select Test</title>
    <g:javascript library='jquery' />
</head>
<body>
    <div>
        <b>Category: </b>
        <g:select id="category" name="category.id"
                  from="${Category.listOrderByName()}" optionKey="id"
                  noSelection="[null:' ']"
                  onchange="categoryChanged(this.value);" />
    </div>
    <div>
        <b>Sub-Category: </b>
        <span id="subContainer"></span>
    </div>
    <script>
        function categoryChanged(categoryId) {
            <g:remoteFunction controller="test" action="categoryChanged"
                              update="subContainer" params="'categoryId='+categoryId"/>
        }
    </script>
</body>
</html>

The second select box is rendered inside the span with id="subContainer". It is updated whenever the category is changed (onchange="categoryChanged(this.value);"). The sub-categories are rendered through an AJAX call: the remoteFunction tag can be used to invoke a controller action asynchronously. It is also important to include the jQuery library. This is done with this GSP code:

<g:javascript library='jquery' />

Since Grails 2.4, the Grails AJAX tags are deprecated. An alternative is to hand-code the JavaScript method. Here is an alternate implementation of the categoryChanged function:

<script>
    function categoryChanged(categoryId) {
        jQuery.ajax({
            type: 'POST',
            data: 'categoryId=' + categoryId,
            url: '/forum/test/categoryChanged',
            success: function (data, textStatus) {
                jQuery('#subContainer').html(data);
            },
            error: function (XMLHttpRequest, textStatus, errorThrown) {
            }
        });
    }
</script>

It is longer but still intuitive. Here is the rest of the controller code, including the method that renders the second select box.
package asia.grails.test

class TestController {
    def form() {
    }
    def categoryChanged(long categoryId) {
        Category category = Category.get(categoryId)
        def subCategories = []
        if (category != null) {
            subCategories = SubCategory.findAllByCategory(category, [order: 'name'])
        }
        render g.select(id: 'subCategory', name: 'subCategory.id',
                        from: subCategories, optionKey: 'id',
                        noSelection: [null: ' '])
    }
}

The controller’s categoryChanged method is invoked by the AJAX function. It returns the select box HTML code, which is rendered inside the span with id="subContainer".

Reference: Grails chained select – load data on one dropdown box depending on another from our JCG partner Jonathan Tan at the Grails cookbook blog....

Grails render images on the fly in GSP

This tutorial will show how to generate PNG images on the fly and display them inside a GSP. This can serve as a basis for more complex behavior, for example, creating report graphs for display in your applications.

Sample Output

This tutorial will show how to generate simple shapes based on an input size and color. Squares and circles will be supported. The images can then be combined for display inside a GSP.

Generate Images on the fly

This is the code to generate a square and a circle in separate controller actions. The java.awt API, a very common package for generating graphics, is used for the drawing logic.

package asia.grails.test

import javax.imageio.ImageIO
import java.awt.Color
import java.awt.Graphics
import java.awt.image.BufferedImage

class TestImageController {

    def square(int size, String color) {
        BufferedImage buffer = new BufferedImage(size, size, BufferedImage.TYPE_INT_RGB);
        Graphics g = buffer.createGraphics();
        Color gfxColor = new Color(Integer.parseInt(color, 16));
        g.setColor(gfxColor);
        g.fillRect(0, 0, size, size);
        response.setContentType("image/png");
        OutputStream os = response.getOutputStream();
        ImageIO.write(buffer, "png", os);
        os.close();
    }

    def circle(int size, String color) {
        BufferedImage buffer = new BufferedImage(size, size, BufferedImage.TYPE_INT_RGB);
        Graphics g = buffer.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, size, size);
        Color gfxColor = new Color(Integer.parseInt(color, 16));
        g.setColor(gfxColor);
        g.fillArc(0, 0, size, size, 0, 360);
        response.setContentType("image/png");
        OutputStream os = response.getOutputStream();
        ImageIO.write(buffer, "png", os);
        os.close();
    }
}

For both actions, a BufferedImage is created to hold the resulting image. The image buffer is manipulated via the Graphics class. The resulting image binary is streamed to the user via the response.outputStream object.
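The square-drawing logic can also be exercised as plain Java, outside of any Grails controller. Here is a small standalone sketch (the class name is mine, not from the tutorial); the circle action follows the same pattern with fillArc instead of fillRect:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

class SquareImage {
    /** Draws a filled square of the given size and hex color (e.g. "ff0000"). */
    public static BufferedImage square(int size, String hexColor) {
        BufferedImage buffer = new BufferedImage(size, size, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = buffer.createGraphics();
        // Parse the hex color string into an RGB color, exactly as the controller does
        g.setColor(new Color(Integer.parseInt(hexColor, 16)));
        g.fillRect(0, 0, size, size);
        g.dispose();
        return buffer;
    }
}
```

In the controller, the only extra steps are setting the content type and writing the buffer to the response output stream via ImageIO.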
The content type should be set so that the browser understands that the incoming binary is an image. If we access the URL http://localhost:8080/forum/testImage/square?size=80&color=0ff0ff, a square image with 80-pixel sides and cyan color is generated. If we access the URL http://localhost:8080/forum/testImage/circle?size=150&color=0000ff, a circle image with a 150-pixel diameter and blue color is generated.

Rendering in GSP

Rendering the images inside a GSP is straightforward. Here is a sample controller and GSP:

package asia.grails.test

class TestController {
    def index() {
    }
}

index.gsp:

<!DOCTYPE html>
<html>
<head>
    <meta name="layout" content="main">
    <title>Image Test</title>
</head>
<body>
    <g:img dir="testImage" file="square?size=150&color=ff0000"/>
    <g:img dir="testImage" file="square?size=50&color=00ff00"/>
    <g:img dir="testImage" file="square?size=75&color=0000ff"/>
    <g:img dir="testImage" file="circle?size=125&color=00ffff"/>
    <g:img dir="testImage" file="circle?size=25&color=ff00ff"/>
    <g:img dir="testImage" file="circle?size=225&color=ffff00"/>
</body>
</html>

The dir attribute maps to a controller, and file maps to the action plus the request parameters. The result is the image shown below.

Reference: Grails render images on the fly in GSP from our JCG partner Jonathan Tan at the Grails cookbook blog....

Microservices in the Enterprise: Friend or Foe?

A micro approach to a macro problem? The microservice hype is everywhere, and although the industry can’t seem to agree on an exact definition, we are repeatedly told that moving away from a monolithic application to a Service-Oriented Architecture (SOA) consisting of small services is the correct way to build and evolve software systems. However, there is currently an absence of traditional ‘Enterprise’ organisations talking about their adoption of microservices. This blog post is a preview to a larger article, which explores the use of microservices in the Enterprise. Interfaces – Good contracts make for good neighbours Whether you are starting a greenfield microservice project or are tasked with deconstructing an existing monolith into services, the first task is to define the boundaries and corresponding Application Programming Interfaces (APIs) of your new components. The suggested granularity of a service in a microservice architecture is finer in comparison with what is typically implemented when using a classical Enterprise Service Oriented Architecture (SOA) approach, but arguably the original intention of SOA was to create cohesive units of reusable business functionality, even if the implementation history tells a different story. A greenfield microservice project often has more flexibility, and the initial design stage can define Domain Driven Design (DDD) inspired bounded contexts with explicit responsibilities and contracts between service provider and consumer (for example, using Consumer Driven Contracts). However, a typical brownfield project must look to create “seams” within the existing applications and implement new (or extracted) services that integrate with the seam interface. The goal is for each service to have high cohesion and loose coupling; the design of the service interface is where the seeds for these principles are sowed. 
Communication – Synchronous vs asynchronous

In practice, we find that many Enterprises will need to offer both synchronous and asynchronous communication in their services. It is worth noting that there is a considerable drive within the industry to move away from the perceived ‘heavyweight’ WS-* communication standards (e.g. WSDL, SOAP, UDDI), even though many of the challenges addressed by these frameworks still exist, such as service discovery, service description and contract negotiation (as articulated very succinctly by Greg Young in a recent presentation at the muCon microservices conference).

Middleware – What about the traditional enterprise stalwarts?

Although many heavyweight Enterprise Service Bus (ESB) products can perform some very clever routing, they are frequently deployed as a black box. Jim Webber once joked that ESB should stand for “Egregious Spaghetti Box,” because the operations performed within proprietary ESBs are not transparent, and are often complex. If requirements dictate the use of an ESB (for example, message splitting or policy-based routing), then open source lightweight ESB implementations such as Mule ESB or Fuse ESB should be among the first options you consider. I usually find that a lightweight MQ platform, such as RabbitMQ or ActiveMQ, is more suitable, because the current trend in SOA communication is towards “dumb pipes and smart endpoints”. In addition to removing potential vendor fees and lock-in, other benefits of using lightweight MQ technologies include easier deployment and management, and simplified testing.

Deploying microservices – How hard can it be?

However you choose to build microservices, it is essential that a continuous integration-style build pipeline be used which includes rigorous automated testing for functional requirements, fault tolerance, security and performance.
The classical SOA approach of manual QA and staged evaluation is arguably no longer appropriate in an economy where ‘speed wins’ and the ability to rapidly innovate and experiment is a competitive advantage (as captured within the Lean Startup movement). Behaviour of your application can become emergent in a microservice-based platform, and although nothing can replace thorough and pervasive monitoring in your production stack, a build pipeline that exercises (or tortures) your components before they are exposed to your customers would appear to be highly beneficial. As I’ve argued in several conference presentations, a good build pipeline should exercise services in the target deployment environment as early in the pipeline as possible. Summary – APIs, lightweight comms, and correct deployment Regardless of whether you subscribe to the microservice hype, it would appear that this style of architecture is gaining traction within practically all software development domains. This article has attempted to provide a primer for understanding key concepts within this growing space, and hopefully reminds readers that many of these problems and solutions have been seen before with classical Enterprise SOA. We would be wise to take care not to reinvent the proverbial ‘service-oriented’ wheel. Please click here for the complete original article, which provides additional information on microservice implementation options on the JVM platform, and also discusses the requirement for Continuous Delivery. A version of this article was originally published in the DZone 2014 Guide to Enterprise Integration. References A full list of references and recommended reading can also be found in the original article and a recent article discussing the business implications of microservices.Reference: Microservices in the Enterprise: Friend or Foe? from our JCG partner Daniel Bryant at the The Tai-Dev Blog blog....

Infinite Loops. Or: Anything that Can Possibly Go Wrong, Does.

A wise man once said:

Anything that can possibly go wrong, does. – Murphy

Some programmers are wise men, thus a wise programmer once said:

A good programmer is someone who looks both ways before crossing a one-way street. – Doug Linder

In a perfect world, things work as expected and you may think that it is a good idea to keep consuming things until the end. So the following pattern is found all over every code base:

Java:

for (;;) {
    // something
}

C:

while (1) {
    // something
}

BASIC:

10 something
20 GOTO 10

Want to see proof? Search GitHub for while(true) and check out the number of matches: https://github.com/search?q=while+true&type=Code

Never use possibly infinite loops

There is a very interesting discussion in computer science around the topic of the “Halting Problem”. The essence of the halting problem, as proved by Alan Turing a long time ago, is the fact that it is really undecidable. While humans can quickly assess that the following program will never stop:

for (;;) continue;

… and that the following program will always stop:

for (;;) break;

… computers cannot decide on such things, and even very experienced humans might not immediately be able to do so when looking at a more complex algorithm.

Learning by doing

In jOOQ, we have recently learned about the halting problem the hard way: by doing. Before fixing issue #3696, we worked around a bug (or flaw) in SQL Server’s JDBC driver. The bug resulted in SQLException chains not being reported correctly, e.g.
when the following trigger raises several errors:

CREATE TRIGGER Employee_Upd_2 ON EMPLOYEE
FOR UPDATE
AS
BEGIN
    Raiserror('Employee_Upd_2 Trigger called...',16,-1)
    Raiserror('Employee_Upd_2 Trigger called...1',16,-1)
    Raiserror('Employee_Upd_2 Trigger called...2',16,-1)
    Raiserror('Employee_Upd_2 Trigger called...3',16,-1)
    Raiserror('Employee_Upd_2 Trigger called...4',16,-1)
    Raiserror('Employee_Upd_2 Trigger called...5',16,-1)
END
GO

So, we explicitly consumed those SQLExceptions, such that jOOQ users got the same behaviour for all databases:

consumeLoop: for (;;)
    try {
        if (!stmt.getMoreResults()
            && stmt.getUpdateCount() == -1)
            break consumeLoop;
    }
    catch (SQLException e) {
        previous.setNextException(e);
        previous = e;
    }

This has worked for most of our customers, as the chain of exceptions thus reported is probably finite, and also probably rather small. Even the trigger example above is not a real-world one, so the number of actual errors reported might be between 1-5.

Did I just say … “probably”?

As our initial wise men said: the number might be between 1-5. But it might just as well be 1000. Or 1000000. Or worse, infinite. As in the case of issue #3696, when a customer used jOOQ with SQL Azure. So, in a perfect world, there cannot be an infinite number of SQLExceptions reported, but this isn’t a perfect world and SQL Azure also had a bug (probably still does), which reported the same error again and again, eventually leading to an OutOfMemoryError, as jOOQ created a huge SQLException chain, which is probably better than looping infinitely. At least the exception was easy to detect and work around. If the loop ran infinitely, the server might have been completely blocked for all users of our customer.
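A defensive upper bound turns a potential infinite loop into a detectable failure. Here is a hedged, generic JDK-only sketch of that idea (the bound, the class, and the step interface are illustrative only, not jOOQ's actual code):

```java
import java.util.function.BooleanSupplier;

class BoundedLoop {
    // Upper bound on iterations, in the spirit of the JPL loop-bounds rule quoted below
    static final int MAX_ITERATIONS = 256;

    /**
     * Repeatedly invokes the step until it reports completion, but never more
     * than MAX_ITERATIONS times. Returns true if the step completed, false if
     * the bound was hit (a signal that something is wrong upstream).
     */
    public static boolean runBounded(BooleanSupplier stepIsDone) {
        for (int i = 0; i < MAX_ITERATIONS; i++) {
            if (stepIsDone.getAsBoolean()) {
                return true;
            }
        }
        return false; // bound exceeded: report and investigate instead of spinning forever
    }
}
```

A misbehaving driver that never reports completion now costs at most 256 iterations instead of a hung server.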
The fix is now essentially this one:

consumeLoop: for (int i = 0; i < 256; i++)
    try {
        if (!stmt.getMoreResults()
            && stmt.getUpdateCount() == -1)
            break consumeLoop;
    }
    catch (SQLException e) {
        previous.setNextException(e);
        previous = e;
    }

True to the popular saying: 640 KB ought to be enough for anybody.

The only exception

So as we’ve seen before, this embarrassing example shows that anything that can possibly go wrong, does. In the context of possibly infinite loops, beware that this kind of bug will take entire servers down. The Jet Propulsion Laboratory at the California Institute of Technology has made this an essential rule for their coding standards:

Rule 3 (loop bounds) All loops shall have a statically determinable upper-bound on the maximum number of loop iterations. It shall be possible for a static compliance checking tool to affirm the existence of the bound. An exception is allowed for the use of a single non-terminating loop per task or thread where requests are received and processed. Such a server loop shall be annotated with the C comment: /* @non-terminating@ */.

So, apart from very few exceptions, you should never expose your code to the risk of infinite loops by not providing upper bounds to loop iterations (the same can be said about recursion, btw.)

Conclusion

Go over your code base today and look for any possible while (true), for (;;), do {} while (true); and other such statements. Review those statements closely and see if they can halt – e.g. using break, or throw, or return, or continue (an outer loop). Chances are that you, or someone before you who wrote that code, was as naive as we were, believing that…

… oh come on, this will never happen

Because, you know what happens when you think that nothing will happen.

Reference: Infinite Loops. Or: Anything that Can Possibly Go Wrong, Does. from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Java Flight Recorder (JFR)

JFR is a Java profiler which will allow you to investigate the runtime characteristics of your code. Typically you will use a profiler to determine which parts of your code are causing large amounts of memory allocation or causing excess CPU to be consumed.

There are plenty of products out there. In the past I’ve used YourKit, OptimizeIt, JProfiler, NetBeans and others. Each has its benefits, and it is largely a matter of personal preference as to which you choose. My current personal favourite is YourKit. It integrates nicely into IntelliJ, has a relatively low overhead, and presents its reports well.

The truth is that profiling is a very inexact science, and it is often worth looking at more than one profiler to build up a clearer picture of what exactly is going on in your program. To my knowledge most profilers rely on the JVMPI/JVMTI agent interfaces to probe the Java program. A major problem with this is safe points: your Java program can only be probed when it is at a safe point. This means that you will get a false picture of what is really going on in your program, especially if much of the activity is between safe points. Also, all profilers add overhead to a varying degree. Profiler overhead will change the characteristics of your program and may cause misleading results from your analysis. Much more information here.

Enter JFR. JFR has been bundled with the JDK since release 7u40. JFR is built with direct access to the JVM. This not only means that there is a very low overhead (claimed to be less than 1% in nearly all cases) but also that it does not rely on safe points. Have a look here at an example of how radically different an analysis from YourKit and JFR can look.

To run JFR you need to add these switches to your Java command line:

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder

JFR is located in Java Mission Control (JMC). To launch JMC, just type jmc on your command line; if you have the JDK in your path, the JMC console will launch.
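For reference, a complete launch line that also starts a timed recording directly from the command line, without going through JMC (the main class and output file name here are placeholders):

```shell
# Unlock JFR and start a 60-second recording at JVM startup,
# dumping the result to myapp.jfr (Oracle JDK 7u40+)
java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
     -XX:StartFlightRecording=duration=60s,filename=myapp.jfr \
     com.example.MyApp
```

The resulting .jfr file can then be opened in JMC for analysis.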
You should see your Java program in the left-hand pane. Right-click on your program and then start a flight recording. You will be presented with a dialog box where you can just accept the defaults (sample for a minute) and then your results will be displayed. It’s worth playing around with the options to find how this will work best for you. As with all good products, this GUI is fairly intuitive.

As you can tell from the command line switches, this is a commercial feature. I’m not exactly sure what that means, but you can read more about that in the documentation here. You can also run this from the command line; it’s all in the documentation.

One problem I did find was that when I downloaded the latest Java 8 snapshot (at this time 1.8.0_40-ea), I was unable to launch my program and got the following message:

/Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/bin/
Error: Trying to use 'UnlockCommercialFeatures', but commercial features are not available in this VM.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

In summary, JFR is a great addition to any developer's toolkit, and as long as you are using JDK release 7u40 or above it’s certainly worth trying it out on your code. (I encourage you to have a look at a previous post, First rule of performance optimisation, in conjunction with JFR.)

Reference: Java Flight Recorder (JFR) from our JCG partner Daniel Shaya at the Rational Java blog....

How to get a 10,000 points StackOverflow reputation

How it all started

In spring 2014, I initiated the Hibernate Master Class project, focusing on best practices and well-established usage patterns. I then realized that all my previous Hibernate experience wouldn’t be enough for this task. I needed more than that. Hibernate has a very steep learning curve and tens of new StackOverflow questions are being asked on a daily basis. With so many problems waiting to be solved, I came to realize this was a great opportunity to prove my current skills, while learning some new tricks. On the 8th of May 2014, I gave my very first StackOverflow answer. After 253 days, on the 16th of January 2015, I managed to get a reputation of over 10,000.

StackOverflow facts

StackExchange offers a data query tool to analyze anything you can possibly think of. Next, I’m going to run some queries against my own account and four well-renowned users:

User                Reputation   Answers
Jon Skeet           743,416      30,812
Peter Lawrey        251,229      10,663
Tomasz Nurkiewicz   152,139      2,964
Lukas Eder          55,208       1,077
Vlad Mihalcea       10,018       581

Accepted answers reputation

The accepted answer ratio tells us how much you can count on the OP (question poster) to accept your answers:

User                Average acceptance ratio   Average acceptance reputation [Ratio x 15]
Jon Skeet           60.42%                     9.06
Peter Lawrey        28.90%                     4.35
Tomasz Nurkiewicz   53.91%                     8.08
Lukas Eder          46.69%                     7.00
Vlad Mihalcea       37.36%                     5.60

The chance of having your answer accepted rarely surpasses the 60% rate, so don’t count too much on this one. Some OPs will never accept your answer, even if it’s the right one and it has already generated a high score.

Lesson 1: Don’t get upset if your answer was not accepted, and think of your answer as a contribution to our community rather than a gift to the question author.
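On StackOverflow, an accepted answer is worth 15 reputation points and each up-vote 10, which is where the ×15 and ×10 multipliers in these tables come from. A toy sketch (purely illustrative, not part of the original analysis) combining both into an expected reputation per answer:

```java
class AnswerReputation {
    /**
     * Expected reputation earned per answer, given the probability of the
     * answer being accepted and its average up-vote score.
     * An accepted answer is worth 15 points, each up-vote 10
     * (ignoring bounties and the daily reputation cap).
     */
    public static double perAnswer(double acceptanceRatio, double averageScore) {
        return acceptanceRatio * 15 + averageScore * 10;
    }
}
```

Plugging in Jon Skeet's figures (60.42% acceptance, 8.16 average score) gives roughly 90 points per answer, which matches the sum of the two table columns.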
Up-votes reputation

Another interesting metric is the answer score graph. The average answer score is a good indicator of your overall answer effectiveness, as viewed by the whole community:

User                Average Score   Average score reputation [Ratio x 10]
Jon Skeet           8.16            81.6
Peter Lawrey        2.50            25
Tomasz Nurkiewicz   4.67            46.7
Lukas Eder          4.25            42.5
Vlad Mihalcea       0.75            7.5

While the answer acceptance is a one-time event, up-voting can be a recurring action. A good answer can increase your reputation long after you’ve posted your solution.

Lesson 2: Always strive for high quality answers. Even if they don’t get accepted, someone else might find one later and thank you with an up-vote.

Bounty hunting reputation

I’ve been a bounty hunter from the very beginning, and the bounty contribution query proves why I happen to favor featured questions over regular ones:

User                Bounty Count   Total bounty reputation   Average bounty reputation
Jon Skeet           67             8,025                     119
Tomasz Nurkiewicz   2              100                       50
Peter Lawrey        4              225                       56
Lukas Eder          2              550                       275
Vlad Mihalcea       36             2,275                     63

To place a bounty, you have to be willing to deduct your own reputation, so naturally the question is both challenging and rewarding. Featured questions have a dedicated tab, therefore getting much more traction than regular ones, which increases the up-vote chance as well.

Lesson 3: Always favor bounty questions over regular ones.

Reputation is a means, not a goal

Reputation alone is just a community contribution indicator, and you should probably care more about tag badges instead. Tag badges prove one’s expertise in a certain technology, and it’s the fairest endorsement system currently available in the software industry. If you want to become an expert in a particular area, I strongly recommend trying to get a gold badge on that topic. The effort of earning 1,000 up-votes will get you more than a virtual medal on your StackOverflow account.
You will get to improve your problem-solving skills and make a name for yourself in the software community. As I said before:

When you answer a question you are reiterating your knowledge. Sometimes you only have a clue, so you start investigating that path, which not only provides you the right answer but also allows you to strengthen your skills. It’s like constant rehearsing.

Conclusion

If you cannot imagine developing software without the helping hand of the StackOverflow knowledge base, then you should definitely start contributing. In the end, the occasional “Thank you, it works now!” is much more rewarding than even a 10,000-point reputation.

Reference: How to get a 10,000 points StackOverflow reputation from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....

The New Agile – Size Matters

Coming up on the 15th year of agile, do we understand business better? Remember that agile started in development teams? As time passes, we feel that the agile manifesto can be applied also at the product level, and maybe even at the portfolio level. There’s definitely a demand for scaling the process from the business side. Let’s take a couple of examples of how we have moved on.

Since we’re talking scale, it makes sense to start small. Do you know about A3? It may sound like a great 90s boy band, but in fact it’s a paper size. We use A3 as a canvas, like the one here from Lean.Org.

What’s so special about an A3-size canvas? Believe it or not, it’s really like agile iterations. Iterations are artificial limits in time. There’s nothing real in there, except that once we accept working with them, we suddenly have a deadline every two weeks. Our behavior changes because there’s a constraint. A3’s size is another type of constraint. We can write very long documents, include a few presentations, and even some cool-looking charts to explain what’s so great about the next year. Or, we can use the constraint to filter all the buzz out, and create a succinct description that fits into the page’s cells. We can write those in, or we can fill the space with sticky notes. What matters is that we can’t overflow. If we do, we need to take something out.

Constraints such as this help us focus, and weed out the trash, leaving us with a common basis and (hopefully) understanding. A3s can be used for anything, and the concept of introducing constraints can be applied anywhere (like, say, backlogs. Or WIP). Here’s another example of an A3 sheet, this time for products: the product canvas by Roman Pichler. This time the sheet is used for describing product information. And we can go on with ideas on how to use A3 for different purposes.

A3 is not the only old-new idea. Stay tuned.

Reference: The New Agile – Size Matters from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

Job Security, Career Stability, and Employability For Startups

I was recently asked to answer a question on Quora about startups and stability, and as I read some of the other replies I noticed a trend. The question was basically “Would joining a startup be a mistake for someone with the goals of stability and career progression?”. The questioner then defined stability as being able to support a family and have nice things (financial stability). The answers ranged from a flat-out “Yes” (i.e. “it’s a mistake“) to “startups provide no stability/career progression“, while another pointed out that most startups fail. The responses were familiar, and similar to objections I’ve received when pitching startups to software engineers over the past fifteen-plus years. Before answering, I considered the many people I know who spent most of their careers at startups and small companies, in comparison to those who worked for larger shops. Have the ones that stuck with startups achieved less (or more) stability and/or career progression?

Stability vs Employability

Let’s consider Candidate A, who has worked for ten years at one large company; most would say that shows job and career stability. After that length of time, we might assume (or hope for) some level of financial stability and at least a small increase in responsibility that could classify as career progression. When presented with Candidate B, who has experience at five startups over a ten-year span, most conclude this demonstrates career instability or even “job hopping”. Without seeing job titles or any duties and accomplishments, it would be difficult to make any guesses about career progression, but many would assume that a series of relatively short stints might not allow for much forward movement. Candidate A clearly has more career stability using traditional measures. However, Candidate B’s experience, at least in the tech world, is the somewhat new normal.
Job security and career stability (marked by few job changes) are what professionals may have strived for historically, but now one could argue that employability is a much more important concept and goal to focus on. Today, Candidate A’s company announced layoffs and Candidate B’s startup ran out of money. Who lands a job first? Who is more employable?

Startups Fail… But They’re (almost) Always Around

When job seekers voice concern about the stability of some software startup I’m pitching, I may concede that most startups will fail, and the conversation may end there. I might even throw in a “Startups are risky“. These candidates are more concerned about job stability (keeping one job) than career stability (the ability to consistently have a job). The fear is that a company will fail, and the candidate would then be a job seeker all over again, with some frictional unemployment and the possibility of worse. Given the failure rate of startups, the fear of a company closing is rational. The fear of any sustained unemployment, at least for many startup veterans, probably is not. Anecdotally, most of the people I know who gravitated towards small/new firms had little or no unemployment, and most appear to have at least the same levels of financial stability and career progression as those at larger firms. The only visible difference is usually that startup veterans have more companies listed on résumés, may have worked for and with some of the same people at different jobs, and some have a wider palette of technical skills. It’s reminiscent of a successful independent contractor’s background.

Once You’re In, You’re In

After the first startup boom/bust, some in the industry tied company stability to career stability or employability, as if being associated with a failed startup might negatively impact future employment options. Many discovered the opposite was true, as those who failed were tagged startup veterans unlikely to repeat the same mistakes twice.
I would expect that those who have worked for multiple startups would likely tell outsiders this: "Once you're in, you're in". Let me explain. While any individual startup may not provide job stability, an ecosystem of startups will provide candidates with career stability and usually increased employability. When startups hire, most seek those with previous startup experience; it's usually right there in the job descriptions. Remember Candidates A and B from earlier? Candidate A hopefully has a shot at a startup job, but Candidate B already has an interview. Due to the transient nature of startup employment and the tendency of startup employees to stay within the startup ecosystem, the ability of those in the startup community to get introduced to new jobs via their network increases dramatically. When Startup X fails, its 50 employees migrate to perhaps 30 other startups. That gives Startup X alumni a host of employment possibilities, which should grow as additional startups rise and fall over time. In smaller cities one can become a known entity within the startup community, virtually guaranteeing employment for as long as startups exist and one's reputation remains positive.

Conclusion

The concept of career stability has changed significantly as increased job movement has become an accepted industry characteristic. When one expects a higher number of job searches over the course of a career, proactive professionals will consider employability and marketability more carefully. Job security ≠ career security. If your main concern is being continuously employed at rising compensation levels, employability will often trump job security.

Reference: Job Security, Career Stability, and Employability For Startups from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Using Google Guava Cache for local caching

Often we have to fetch data from a database or another web service, or load it from the file system. When a network call is involved, there are inherent network latencies and bandwidth limitations. One approach to overcoming this is to keep a cache local to the application. If your application spans multiple nodes, then the cache will be local to each node, causing inherent data inconsistency. This data inconsistency can be traded off for better throughput and lower latencies. But if the data inconsistency makes a significant difference, one can reduce the TTL (time to live) of the cached objects, thereby reducing the window during which the inconsistency can occur. Among the many ways of implementing a local cache, one which I have used in a high-load environment is Guava cache. We used Guava cache to serve 80,000+ requests per second, with a 90th-percentile latency of ~5 ms. This helped us scale within our limited network bandwidth. In this post I will show how one can add a layer of Guava cache in order to avoid frequent network calls. For this I have picked a very simple example: fetching the details of a book, given its ISBN, using the Google Books API. A sample request for fetching book details by ISBN-13 string is: https://www.googleapis.com/books/v1/volumes?q=isbn:9781449370770&key={API_KEY}

A very detailed explanation of the features of Guava Cache can be found here. In this example I will be using a LoadingCache. A LoadingCache takes a block of code which it uses to load the data into the cache for a missing key. So when you do a get on the cache with a non-existent key, the LoadingCache fetches the data using the CacheLoader, sets it in the cache, and returns it to the caller.
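Conceptually, a loading cache behaves like a map whose misses trigger a loader. The following stdlib-only sketch (this is not Guava itself, and the class name NaiveLoadingCache is my own) illustrates the load-on-miss behaviour using ConcurrentHashMap.computeIfAbsent:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class NaiveLoadingCache<K, V> {

    private final ConcurrentMap<K, V> map = new ConcurrentHashMap<>();
    private final Function<K, V> loader;
    private final AtomicInteger loads = new AtomicInteger();

    public NaiveLoadingCache(Function<K, V> loader) {
        this.loader = loader;
    }

    // On a miss, invoke the loader, store the result and return it;
    // on a hit, return the cached value without calling the loader.
    public V get(K key) {
        return map.computeIfAbsent(key, k -> {
            loads.incrementAndGet();
            return loader.apply(k);
        });
    }

    public int loadCount() {
        return loads.get();
    }

    public static void main(String[] args) {
        NaiveLoadingCache<String, String> cache =
                new NaiveLoadingCache<>(isbn -> "details-for-" + isbn);
        System.out.println(cache.get("9781449370770")); // loader runs
        System.out.println(cache.get("9781449370770")); // served from cache
        System.out.println(cache.loadCount());          // 1
    }
}
```

Guava's LoadingCache adds eviction, expiry, and statistics on top of this basic idea, as the rest of the post shows.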
Let's now look at the model classes we need for representing the book details: the Book class and the Author class. The Book class is defined as:

//Book.java
package info.sanaulla.model;

import java.util.ArrayList;
import java.util.List;

public class Book {

    private String isbn13;
    private List<Author> authors;
    private String publisher;
    private String title;
    private String summary;
    private Integer pageCount;
    private String publishedDate;

    public String getIsbn13() {
        return isbn13;
    }

    public void setIsbn13(String isbn13) {
        this.isbn13 = isbn13;
    }

    public List<Author> getAuthors() {
        return authors;
    }

    public void setAuthors(List<Author> authors) {
        this.authors = authors;
    }

    public String getPublisher() {
        return publisher;
    }

    public void setPublisher(String publisher) {
        this.publisher = publisher;
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public String getSummary() {
        return summary;
    }

    public void setSummary(String summary) {
        this.summary = summary;
    }

    public void addAuthor(Author author) {
        if (authors == null) {
            authors = new ArrayList<Author>();
        }
        authors.add(author);
    }

    public Integer getPageCount() {
        return pageCount;
    }

    public void setPageCount(Integer pageCount) {
        this.pageCount = pageCount;
    }

    public String getPublishedDate() {
        return publishedDate;
    }

    public void setPublishedDate(String publishedDate) {
        this.publishedDate = publishedDate;
    }
}

And the Author class is defined as:

//Author.java
package info.sanaulla.model;

public class Author {

    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

Let's now define a service that fetches the data from the Google Books REST API; call it BookService. This service does the following:

- Fetch the HTTP response from the REST API.
- Use Jackson's ObjectMapper to parse the JSON into a Map.
- Fetch the relevant information from the Map obtained in the previous step.

I have extracted a few operations out of BookService into a Util class, namely:

- Reading the application.properties file, which contains the Google Books API key. (I haven't committed this file to the git repository, but you can add it under src/main/resources as application.properties and the Util API will read it for you.)
- Making an HTTP request to the REST API and returning the JSON response.

This is how the Util class is defined:

//Util.java
package info.sanaulla;

import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Properties;

public class Util {

    private static ObjectMapper objectMapper = new ObjectMapper();
    private static Properties properties = null;

    public static ObjectMapper getObjectMapper() {
        return objectMapper;
    }

    public static Properties getProperties() throws IOException {
        if (properties != null) {
            return properties;
        }
        properties = new Properties();
        InputStream inputStream = Util.class.getClassLoader()
                .getResourceAsStream("application.properties");
        properties.load(inputStream);
        return properties;
    }

    public static String getHttpResponse(String urlStr) throws IOException {
        URL url = new URL(urlStr);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");
        conn.setConnectTimeout(5000);

        if (conn.getResponseCode() != 200) {
            throw new RuntimeException("Failed : HTTP error code : "
                    + conn.getResponseCode());
        }

        BufferedReader br = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        StringBuilder outputBuilder = new StringBuilder();
        String output;
        while ((output = br.readLine()) != null) {
            outputBuilder.append(output);
        }
        conn.disconnect();
        return outputBuilder.toString();
    }
}

And so our service class looks like:

//BookService.java
package info.sanaulla.service;

import com.google.common.base.Optional;

import info.sanaulla.Constants;
import info.sanaulla.Util;
import info.sanaulla.model.Author;
import info.sanaulla.model.Book;

import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class BookService {

    public static Optional<Book> getBookDetailsFromGoogleBooks(String isbn13)
            throws IOException {
        Properties properties = Util.getProperties();
        String key = properties.getProperty(Constants.GOOGLE_API_KEY);
        String url = "https://www.googleapis.com/books/v1/volumes?q=isbn:" + isbn13;
        String response = Util.getHttpResponse(url);
        Map bookMap = Util.getObjectMapper().readValue(response, Map.class);
        Object bookDataListObj = bookMap.get("items");
        Book book = null;
        if (bookDataListObj == null || !(bookDataListObj instanceof List)) {
            return Optional.fromNullable(book);
        }

        List bookDataList = (List) bookDataListObj;
        if (bookDataList.size() < 1) {
            return Optional.fromNullable(null);
        }

        Map bookData = (Map) bookDataList.get(0);
        Map volumeInfo = (Map) bookData.get("volumeInfo");
        book = new Book();
        book.setTitle(getFromJsonResponse(volumeInfo, "title", ""));
        book.setPublisher(getFromJsonResponse(volumeInfo, "publisher", ""));
        List authorDataList = (List) volumeInfo.get("authors");
        for (Object authorDataObj : authorDataList) {
            Author author = new Author();
            author.setName(authorDataObj.toString());
            book.addAuthor(author);
        }
        book.setIsbn13(isbn13);
        book.setSummary(getFromJsonResponse(volumeInfo, "description", ""));
        book.setPageCount(Integer.parseInt(
                getFromJsonResponse(volumeInfo, "pageCount", "0")));
        book.setPublishedDate(getFromJsonResponse(volumeInfo, "publishedDate", ""));

        return Optional.fromNullable(book);
    }

    private static String getFromJsonResponse(Map jsonData, String key,
            String defaultValue) {
        return Optional.fromNullable(jsonData.get(key)).or(defaultValue).toString();
    }
}

Adding caching on top of the Google Books API call

We can create a cache object using the CacheBuilder API provided by the Guava library. It provides methods to set properties like:

- the maximum number of items in the cache,
- the time to live of a cache entry, based on its last write time or last access time,
- the TTL for refreshing a cache entry,
- recording stats on the cache, such as hits, misses and loading time,
- a loader to fetch the data on a cache miss or refresh.

What we ideally want is for a cache miss to invoke the API written above, i.e. getBookDetailsFromGoogleBooks, to store a maximum of 1000 items, and to expire entries after 24 hours. The piece of code that builds the cache looks like:

private static LoadingCache<String, Optional<Book>> cache = CacheBuilder.newBuilder()
        .maximumSize(1000)
        .expireAfterAccess(24, TimeUnit.HOURS)
        .recordStats()
        .build(new CacheLoader<String, Optional<Book>>() {
            @Override
            public Optional<Book> load(String s) throws IOException {
                return getBookDetailsFromGoogleBooks(s);
            }
        });

It is important to note that the maximum number of items you store in the cache impacts the heap used by your application. You have to decide this value carefully, depending on the size of each object you are going to cache and the maximum heap memory allocated to your application.
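The expire-after-access behaviour can be illustrated with a small stdlib-only toy (this is not Guava's implementation, and the ExpiringCache name is my own): each entry remembers when it was last touched, and a get that finds a stale entry reloads it via the loader.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;
import java.util.function.LongSupplier;

public class ExpiringCache<K, V> {

    private static final class Entry<V> {
        V value;
        long lastAccess;
    }

    private final Map<K, Entry<V>> map = new HashMap<>();
    private final Function<K, V> loader;
    private final long ttlMillis;
    private final LongSupplier clock; // injectable clock, so the TTL is testable

    public ExpiringCache(Function<K, V> loader, long ttlMillis, LongSupplier clock) {
        this.loader = loader;
        this.ttlMillis = ttlMillis;
        this.clock = clock;
    }

    public synchronized V get(K key) {
        long now = clock.getAsLong();
        Entry<V> e = map.get(key);
        if (e == null || now - e.lastAccess > ttlMillis) {
            e = new Entry<>();
            e.value = loader.apply(key); // miss or expired: reload
            map.put(key, e);
        }
        e.lastAccess = now; // touching the entry extends its life
        return e.value;
    }
}
```

With a fake clock you can see the behaviour: a get within the TTL returns the cached value, while a get after the TTL triggers a reload. Guava evicts in the background as well, but the access-resets-the-timer idea is the same.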
Let's put this into action and see how the cache stats are reported:

package info.sanaulla;

import com.google.common.cache.CacheStats;

import info.sanaulla.model.Book;
import info.sanaulla.service.BookService;

import java.io.IOException;
import java.util.concurrent.ExecutionException;

public class App {

    public static void main(String[] args) throws IOException, ExecutionException {
        Book book = BookService.getBookDetails("9780596009205").get();
        System.out.println(Util.getObjectMapper().writeValueAsString(book));
        book = BookService.getBookDetails("9780596009205").get();
        book = BookService.getBookDetails("9780596009205").get();
        book = BookService.getBookDetails("9780596009205").get();
        book = BookService.getBookDetails("9780596009205").get();
        CacheStats cacheStats = BookService.getCacheStats();
        System.out.println(cacheStats.toString());
    }
}

And the output we get is:

{"isbn13":"9780596009205","authors":[{"name":"Kathy Sierra"},{"name":"Bert Bates"}],"publisher":"\"O'Reilly Media, Inc.\"","title":"Head First Java","summary":"An interactive guide to the fundamentals of the Java programming language utilizes icons, cartoons, and numerous other visual aids to introduce the features and functions of Java and to teach the principles of designing and writing Java programs.","pageCount":688,"publishedDate":"2005-02-09"}
CacheStats{hitCount=4, missCount=1, loadSuccessCount=1, loadExceptionCount=0, totalLoadTime=3744128770, evictionCount=0}

This is a very basic usage of Guava cache, which I wrote as I was learning to use it. I have also made use of other Guava APIs, like Optional, which wraps existent or non-existent (null) values in objects. The code is available on GitHub: https://github.com/sanaulla123/Guava-Cache-Demo. There are concerns I haven't gone into in detail, such as how the cache handles concurrency.
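On the concurrency point: caches of this era (like the pre-Java-8 ConcurrentHashMap) rely on lock striping, where the key space is split across independently locked segments so writers on different segments don't block each other. A stdlib-only toy of the idea (the StripedMap name is my own, and unlike Guava's cache this toy also locks on reads):

```java
import java.util.HashMap;
import java.util.Map;

public class StripedMap<K, V> {

    private final Map<K, V>[] segments;

    @SuppressWarnings("unchecked")
    public StripedMap(int segmentCount) {
        segments = new Map[segmentCount];
        for (int i = 0; i < segmentCount; i++) {
            segments[i] = new HashMap<>();
        }
    }

    // Route each key to a segment by hash; two writers contend
    // only when their keys land in the same segment.
    private Map<K, V> segmentFor(K key) {
        return segments[(key.hashCode() & 0x7fffffff) % segments.length];
    }

    public V put(K key, V value) {
        Map<K, V> seg = segmentFor(key);
        synchronized (seg) {
            return seg.put(key, value);
        }
    }

    public V get(K key) {
        Map<K, V> seg = segmentFor(key);
        synchronized (seg) {
            return seg.get(key);
        }
    }
}
```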
Under the hood it uses a segmented concurrent hash map, such that gets are always non-blocking, while the number of concurrent writes is bounded by the number of segments. A useful link related to this: http://guava-libraries.googlecode.com/files/ConcurrentCachingAtGoogle.pdf

Reference: Using Google Guava Cache for local caching from our JCG partner Mohamed Sanaulla at the Experiences Unlimited blog....

JVM is down with “OutOfMemory” error – what should I do?

Amazing as it may seem, this particular cry "from the depths" frequently appears among search requests regarding JVM settings. You have probably faced the "I remember that option, but how do I enable it?" problem while administering servers or adjusting virtual appliances from time to time (semi-annually, for example), apart from your main tasks. No wonder: basic settings are easy to forget if they are rarely used. So what do you do in this case? Turn to Google, of course, hoping to find the answer within 20 minutes. This approach may help for simple tasks. But when you perform delicate adjustments of the JVM, complications appear. Sometimes you have to cut through junk information aimed at newcomers, or untangle a muddle of terms and different approaches to resolving one and the same task, as identical options go by many names in different sources. In the long run, the working day is over and the task is far from resolved.

Java Developer's Crib

Who said that only students need cribs? When we search for information on the Internet, we bookmark the most useful pages; bookmarks are cribs, in fact. But the only way to organize them is topical folders, and it is impossible to filter their contents. Fortunately, a new resource, jvmmemory.com, has recently been created to accumulate relevant information for Java developers of any level, letting them pick only the settings they need through a user interface, with everything unwanted cut off. The idea originated with Leonid Vygovsky, the leader of a development team from St. Petersburg, Russia, Ph.D., teaching assistant at Saint Petersburg State Electrotechnical University (LETI), and the author of various publications and patents. A short interview with Leonid, describing the strengths of this project, is presented below. Save'n'use!

Leonid, tell us about your resource.
The site is dedicated to JVM settings, and to memory tuning first of all, as 99% of all adjustment concerns memory. The JVM itself presents very little information about its settings. The Internet, in turn, provides a lot of information, which can be incorrect as well as outdated. The site accumulates checked settings that will come in handy for the majority of developers. It also provides links to selected resources dedicated to the JVM garbage collector.

Tell us a bit more about the garbage collector.

Garbage collection algorithms are named differently by different authors, which causes some difficulty. The site contains a unification of all existing collector names along with short descriptions. There are two stages of garbage collection in the JVM: first, garbage objects of the young generation are collected, and then those of the tenured generation. You are free to choose different algorithms for the two generations according to a particular scheme.

What was the reason for creating the site?

I have always been interested in developing applications that are really useful to people. So when I once again found myself googling permgen settings in Java, I decided to create a small and simple utility for adjusting these parameters. Besides, while in charge of project development I have always wanted to search for new tools that improve developers' efficiency. I strongly believe that this JavaScript-based project, built with the AngularJS framework, is a promising idea.

What makes the site unique and convenient, besides the crib function?

The JVM scarcely ever displays errors when you set up contradictory options. The resource, in turn, allows the user to make correct settings only. The site does not contain the full set of options, only the most necessary ones, which helps resolve optimization issues. Furthermore, outdated deprecated options are deleted and dangerous options are marked.
The options are selected according to the Pareto principle: 20% of the effort produces 80% of the outcome. The evaluation was based on how often an option is mentioned in different sources, and on the trustworthiness of those sources.

Were there any complexities during the deployment process?

There were no technical complexities. Yet there were problems with differing algorithm names and with cutting off incorrect information regarding settings. The most complicated tasks were realizing the UI scheme for memory garbage collection and defining which set of settings is preferable for each collector. I tried to emphasize the structure of the site. While teaching students, I realized that the way material is presented is of paramount importance: structure and logical grouping help a great deal with digestion. I followed the same principle here, but it was not as simple as I had expected.

In what way do you plan to develop the project?

If the project draws the attention of the community, it will be developed socially. The main directions are adding more information and a feature that enables saving personal settings. Community feedback will certainly be taken into consideration.

Could you come up with a slogan for your project on the spot?

Save'n'use! Check out the site!...
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.