
Knowing the bits

We use complex systems. My mother once said that there could be little leprechauns behind the TV screen redrawing the picture 50 times a second, for all she cared. (At least she knew that TV in Europe displayed 50 (half) screens every second.) Most people do not care about the electronics and the software around us. The trend is that this technology penetration is going to become even denser. Electronics gets cheaper, programming becomes easier, and soon toilet paper will have one-time-use embedded computers on it. (Come up with a good application!) Face recognition is not the privilege of the NSA, CIA, KGB or Mossad, and the technology spread does not stop at the level of big corporations like FB or Google. Shops have started to install cameras and software that recognize and identify frequent buyers, helping the work of the sales staff. People get used to it, and IT personnel are no different, are we? Kind of, yes. The difference is that we are interested in the details of those leprechauns: how they do their job. We know that these days there are liquid crystals in the screen, that they are controlled by low-voltage signals (at least compared to the voltages of the former CRT solutions), and that there is a processor in the TV/toaster/toilet paper, programmed in a language such as Java. We, Java programmers, program these applications, and we not only use the language (including the runtime) but also layered software: frameworks. How do these layered pieces of software work? Should we understand them, or should we just use them and hope that they work? The more you know a framework, the better you can use it. Better means faster, more reliable, creating code that is more likely to be compatible with future versions. On the other hand, there has to be a reasonable point where you halt learning and start using. There is no point knowing all the details of a framework if you never start using it. You should aim for the value you generate.
At the other end of the line, however, if you do not have enough knowledge of the framework you may end up using a hammer to dig a hole instead of a shovel. I usually feel confident when my knowledge reaches the level where I understand how they (the developers of the framework) did it. When I can bravely say: if I had the time (sometimes perhaps more than the lifetime of a single person) I could develop that framework myself. Of course, I will not, because I do not have the time and also, more importantly, because there is no point developing something that is already developed with appropriate quality. Or is there? "I could do it better." I have heard that many times from junior programmers, and from programmers who considered themselves not that junior. The correct attitude would have been: "I could do it better, but I won't, because it is done and is good enough." You do not need the best. You just need a solution that is good enough. There is no point investing more if there is no extra leverage. There is no point investing more even if there is leverage, when the same investment would yield more in other areas. Generally, that is how it is when you are a professional. Face it!

Reference: Knowing the bits from our JCG partner Peter Verhas at the Java Deep blog....

Live Templates in IntelliJ

As described here, IntelliJ's live templates let you easily insert predefined code fragments into your source code. I have posted some of my most used templates below, a link to my complete list of template files on GitHub (as a reference for myself when I set up new IntelliJ environments) and the steps I took to add the IntelliJ settings file to GitHub. For example, I set up a template such that I can type test, hit tab, and it will insert this JUnit code snippet for me:

```java
@Test
public void $NAME$() {
    $END$
}
```

It is a JUnit test method, with the cursor initially placed after "public void", ready for typing the test name. The cursor then jumps to between the {}s, ready to start writing the test.

IntelliJ templates are stored in a user.xml file at ~/Library/Preferences/<product name><version number>/templates. For example, for IntelliJ 13, it is ~/Library/Preferences/IntelliJIdea13/templates/user.xml.

Some of my other templates are listed below, with the trigger first. So that I can use these templates on any IntelliJ (e.g. work and home), I have checked my complete list in here at GitHub.

before:

```java
@Before
public void setup() {
    $END$
}
```

after:

```java
@After
public void tearDown() {
    $END$
}
```

nyi:

```java
fail("Not yet implemented");
```

puv:

```java
public void $NAME$() {
    $END$
}
```

main:

```java
public static void main(String[] args) {
    $END$
}
```

Steps I took to add the IntelliJ settings to GitHub

First, I set up a new repo in GitHub at https://github.com/sabram/IntelliJ. Then, I followed some instructions from this StackOverflow posting on how to convert an existing non-empty directory into a Git working directory:

```shell
cd ~/Library/Preferences/IntelliJIdea13
git init
git add templates/user.xml
git commit -m 'initial version of IntelliJ user.xml'
git remote add saIntelliJ https://github.com/sabram/IntelliJ.git
```

At this point, I got an error suggesting I needed to do a git pull first. But when I did a git pull saIntelliJ I got an error saying: You asked to pull from the remote 'saIntelliJ', but did not specify a branch.
Because this is not the default configured remote for your current branch, you must specify a branch on the command line.

So, I edited .git/config based on this posting, to include:

```
[branch "master"]
    remote = saIntelliJ
    merge = refs/heads/master
```

Then I was able to run:

```shell
git pull saIntelliJ
git push -u saIntelliJ master
```

successfully, and I can just use git pull and git push going forward, with no need to specify the repo name (saIntelliJ) each time.

Reference: Live Templates in IntelliJ from our JCG partner Shaun Abram at the Shaun Abram blog....
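For reference, each live template in the user.xml mentioned above is stored as an XML entry. A rough sketch of what the test template could look like on disk (the exact attributes and context options vary between IntelliJ versions, so treat this as an approximation rather than a spec):

```xml
<templateSet group="user">
  <template name="test"
            value="@Test&#10;public void $NAME$() {&#10;    $END$&#10;}"
            description="JUnit test method"
            toReformat="true"
            toShortenFQNames="true">
    <context>
      <option name="JAVA_CODE" value="true" />
    </context>
  </template>
</templateSet>
```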

Dropwizard: painless RESTful JSON HTTP web services

Java developers looking for a quick, painless way of creating production-ready RESTful JSON HTTP web services should consider the Dropwizard framework. Dropwizard brings together well-regarded libraries that complement each other so you can get to what's important: writing and delivering working code. For those interested in details on the libraries used, please refer to the Dropwizard overview. Fortunately, Dropwizard doesn't make you deal with all of its individual components. You'll be able to keep your focus on your work at hand. If you've got some time, stick around and let's make something with Dropwizard. All code for this tutorial is available at GitHub.

How do you get started with Dropwizard? A single Maven, Gradle, or Ivy dependency will get you all the components necessary for making Dropwizard-powered web services.

```xml
<dependency>
    <groupId>com.yammer.dropwizard</groupId>
    <artifactId>dropwizard-core</artifactId>
    <version>0.6.2</version>
</dependency>
```

Note: Please refer to Dropwizard's excellent documentation if you encounter anything you think isn't explained sufficiently in this short posting.

What shall we make? Let's make a web service that returns the current date and time for a given timezone. We'll use a configurable default timezone if a client decides not to specify one.

Configuration

Our super-simple time-service.yml configuration file will look like this:

```yaml
defaultTimezone: UTC
```

Behind the scenes, Dropwizard will load, parse, validate, and turn that configuration into an object. All we need to do is specify it as a class.

```java
public class TimezoneConfiguration extends Configuration {
    @NotEmpty
    @JsonProperty
    private String defaultTimezone;

    public String getDefaultTimezone() {
        return defaultTimezone;
    }
}
```

Service Output

Let's say we want the output of our web service to look like this:

```json
{ "time": "2014-02-04 13:45:02" }
```

The corresponding class is straightforward.
```java
public class Time {
    private final String time;

    public Time(String time) {
        this.time = time;
    }

    public String getTime() {
        return time;
    }
}
```

Resource

Next, we decide we want the URL path for our web service to be /time. And we need to specify that the resource will return JSON. Putting those together gives us this:

```java
@Path("/time")
@Produces(MediaType.APPLICATION_JSON)
public class TimeResource {
}
```

The only RESTful action that makes sense right now for our demo web service is GET, so let's make a method for that. When consuming our web service, the client can provide a timezone as a query string parameter.

```java
@GET
public Time getTime(@QueryParam("timezone") String timezone) {
}
```

That leaves us with three more things to do:

- handle a given timezone from the client
- substitute a default timezone if none is given
- format the current date and time with the timezone

```java
@Path("/time")
@Produces(MediaType.APPLICATION_JSON)
public class TimeResource {
    private final String defaultTimezone;

    public TimeResource(String defaultTimezone) {
        this.defaultTimezone = defaultTimezone;
    }

    @GET
    public Time getTime(@QueryParam("timezone") Optional<String> timezone) {
        DateFormat formatter = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        TimeZone timeZone = TimeZone.getTimeZone(timezone.or(defaultTimezone));
        formatter.setTimeZone(timeZone);
        String formatted = formatter.format(new Date());
        return new Time(formatted);
    }
}
```

Service

Now, let's bring together all the pieces of our web service in our entry-point class, which we'll call TimeService. Here we'll use our TimezoneConfiguration to pass the default timezone to TimeResource.
```java
public class TimeService extends Service<TimezoneConfiguration> {
    public static void main(String[] args) throws Exception {
        new TimeService().run(args);
    }

    @Override
    public void run(TimezoneConfiguration config, Environment environment) {
        String defaultTimezone = config.getDefaultTimezone();
        TimeResource timeResource = new TimeResource(defaultTimezone);
        environment.addResource(timeResource);
    }

    @Override
    public void initialize(Bootstrap<TimezoneConfiguration> bootstrap) {
    }
}
```

Pencils Down

That's it! We've just written a Dropwizard-based web service without mind-numbing boilerplate or mounds of obtuse XML configuration.

Running

Running your web service is as simple as executing a command-line Java application – no need to worry about .war files or servlet containers.

```shell
java -cp libraries/* name.christianson.mike.TimeService server time-service.yml
```

Now, point your web browser or curl at http://localhost:8080/time?timezone=MST and have fun!

All code for this tutorial is available at GitHub.

Reference: Dropwizard: painless RESTful JSON HTTP web services from our JCG partner Mike Christianson at the CodeAwesome blog....
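The timezone-defaulting logic inside getTime above can be exercised on its own. A minimal sketch (TimeFormatDemo is a hypothetical name; it uses a nullable String instead of the Guava Optional that Dropwizard 0.6.x resources use, and a fixed Date so the output is deterministic):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class TimeFormatDemo {

    // Mirrors the formatting done in TimeResource.getTime: fall back to the
    // default timezone when the client did not supply one.
    static String format(String timezone, String defaultTimezone, Date when) {
        SimpleDateFormat formatter =
                new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.US);
        String effective = (timezone != null) ? timezone : defaultTimezone;
        formatter.setTimeZone(TimeZone.getTimeZone(effective));
        return formatter.format(when);
    }

    public static void main(String[] args) {
        Date epoch = new Date(0L); // 1970-01-01 00:00:00 UTC
        System.out.println(format(null, "UTC", epoch));  // default kicks in
        System.out.println(format("EST", "UTC", epoch)); // client-supplied zone
    }
}
```

Keeping the formatting in a small method like this makes the resource itself trivial to unit-test without spinning up the whole service.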

How to get rid of helper and utils classes

There you go once again, while performing a code review or after having justified a quick coding in the name of urgency and priority: it clearly stands in front of you, yet another helper class. But everything works fine and the show must go on, release after release, so that helper class soon becomes a monster class, providing tons of static methods, freely growing in its utils package, often a no man's land of technical debt where object oriented design didn't dare to step in. The provided facility is centralized and hence DRY – some developer would shout, maybe its coder. It's fast because everything is static – somebody else within the team may claim, maybe the one who added another static method to its list. It's easy to use, we kept it simple – you could hear in the room, yet another misunderstanding of KISS. We could argue that helper and utils classes often come to hand really easily, especially when we can't modify the right target class for the new functionality (located in an external library, for instance), or we actually can't find that target (unclear domain model, PoC, lack of requirements), or we simply don't want to find it (laziness, the main global source of helper classes). The big problem, though, is that this is clearly not an object oriented solution, and over time (with lack of team communication, resource rotation, quick fixes and workarounds) it can lead to endless containers of static methods and maintenance headaches (you wanted to be DRY, but you end up having ten methods providing almost the same functionality, if not the same; you wanted to be fast, but now you can't easily add a cache mechanism to that static monster, or you get into trouble with concurrency; you wanted to keep things simple, but your IDE now shows a long list of heterogeneous methods which doesn't simplify your task). But don't worry, we'll try to solve it.
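To make the problem concrete, here is a sketch of the kind of class being described (ProjectUtils and its methods are hypothetical names invented for illustration):

```java
// A typical "monster" utils class: unrelated static methods, no state,
// no clear responsibility, never instantiated.
public final class ProjectUtils {

    private ProjectUtils() {
        // never instantiated
    }

    public static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }

    public static int parseIntOrDefault(String s, int fallback) {
        try {
            return Integer.parseInt(s.trim());
        } catch (RuntimeException e) { // NumberFormatException or NPE
            return fallback;
        }
    }

    // ...and over the years, dozens more unrelated methods accumulate here.
}
```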
Let's refactor that helper class

Firstly, we need a definition of our target problem: a stateless class (with that special Helper or Utils suffix) which provides only static methods, is never instantiated as an object in the project, and has no clear responsibility. Secondly, we need an almost* deterministic approach to solve the problem. That almost stands for exceptions and project peculiarities: the final decision really depends on specific scenarios, which undermines any claim of a universal solution. We'll eventually need to analyse the given class and try to:

- find the target class to which a certain static method should belong, or
- find the target business domain which the class actually serves, and hence transform it into a related component, renaming it and removing static methods (replacing them with behaviours), or
- add a new class providing one or more behaviours (each a previously existing static method) via an object oriented approach.

Any of the above solutions would give us a better model. Let's then get to the point through the following steps (assuming the project is refactored accordingly at each step):

- To facilitate our task, let's remove any unused method of our helper class within the project (your IDE will definitely help you).
- Let's then set the class definition to final. Did you get any compilation error within the project? If yes, why was that helper or utils class extended? You may already have a target though: the child class. If the child class is yet another helper class (really?), merge it with its parent.
- If not already there, let's add a private constructor to the class. Did you get any compilation error within the project? Then somewhere the class was actually instantiated, hence it wasn't a pure helper class or it wasn't used correctly. Look at those callers; you may spot a target class (or domain) to which a method or a whole set of methods may belong.
- Let's group the class methods by a certain affinity, similar signatures, breaking down the helper class into smaller helper classes (from miscellaneous to correlated methods; that affinity may be our target domain indeed). Often at this point we would move from a large utils class to lighter helper classes (tip: don't be afraid of creating classes with just one method at this point), narrowing our scopes (from ProjectUtils to CarHelper, EngineHelper, WheelHelper, etc.). (Hey, isn't your code already getting cleaner?)
- If any of these new classes has only one method, let's check its usage. If we got only one caller, lucky you, that's our target class! You could move the method to that class as a behaviour or a private method (keeping its static marker or taking advantage of internal state). The helper class disappeared.
- In each helper class we got so far (one of which could actually have been your starting point), identify a common state among the correlated methods. Tip: look for a common parameter most of those methods take (e.g. all methods take a Car object as input); that's an alert: these methods should probably belong to the Car class (or an extension? a wrapper?) as behaviours. Otherwise, that common parameter may become a class field, a state, which could be passed to a constructor and used by all the (no longer static) methods. That state would suggest the prefix of the class name, and the methods' affinity could suggest a class of behaviours (CarValidator, CarReader, CarConverter and so on). The helper class disappeared.
- If the family of methods uses different parameters, depending on optional input or representations of the same input, then consider transforming the helper via a fluent interface using the Builder pattern: from a collection of static methods like Helper.calculate(x), calculate(x, y), calculate(x, z), calculate(y, z) we could easily get to something like newBuilder().with(x).with(y).calculate().
  The helper class would then offer behaviours, reduce its list of business methods and provide more flexibility for future extensions. Callers would then use it as an internal field for reuse or instantiate it where needed. The helper class (as we knew it) disappeared.
- If the helper class provides methods which are actually actions for different inputs (but, at this point, for the same domain), consider applying the Command pattern: the caller will create the required command (which will handle the necessary input and offer a behaviour) and an invoker will execute it within a certain context. You may get a command implementation for each static method, and your code would move from Helper.calculate(x, y) and calculate(z) to something like invoker.calculate(new Action(x, y)). Bye bye helper class.
- If the helper class provides methods for the same input but different logics, consider applying the Strategy pattern: each static method may easily become a strategy implementation, removing the need for its original helper class (replaced by a context component then).
- If the given set of static methods concerns a certain class hierarchy or a defined collection of components, then consider applying the Visitor pattern: you may get several visitor implementations providing different visit methods which would probably replace, partially or entirely, the previously existing static methods.
- If none of the above cases meets your criteria, then apply the three most important indicators: your experience, your competence in the given project, and common sense.

Conclusion

The process is pretty clear: look for the right domain and a reasonable target class, or consider refactoring the given helper class via a standard approach applying an object oriented design (with increased code complexity in some cases though – worth it?).
Going through the list of aforementioned cases, more than one bell would probably ring while you try to work out how to accomplish this often not-so-easy refactoring; specific constraints may limit certain solutions; the complexity of the static methods and the flows involved may require several phases of refactoring, refining it until an acceptable result. Or you may choose to stick with that helper class (hopefully applying at least the first five steps above), in the name of code readability and simplicity, to a certain extent. Helper classes are not universally evil, but too often you don't actually need them.

Reference: How to get rid of helper and utils classes from our JCG partner Antonio Di Matteo at the Refactoring Ideas blog....
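The fluent-builder refactoring described above can be sketched as follows, with a hypothetical Calculation class standing in for the overloaded Helper.calculate(...) statics (the summing logic is a placeholder for whatever the original helper computed):

```java
import java.util.ArrayList;
import java.util.List;

// Replaces static overloads like Helper.calculate(x), calculate(x, y),
// calculate(x, y, z): each combination becomes a chain of with(...) calls.
public final class Calculation {

    private final List<Integer> operands = new ArrayList<>();

    private Calculation() {
    }

    public static Calculation newBuilder() {
        return new Calculation();
    }

    public Calculation with(int operand) {
        operands.add(operand);
        return this;
    }

    public int calculate() {
        // Stand-in logic: the original helper's computation would go here.
        return operands.stream().mapToInt(Integer::intValue).sum();
    }
}
```

A caller then writes Calculation.newBuilder().with(x).with(y).calculate(), and the class can later grow state, caching, or validation without touching every call site.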

Driving Devops

There is a lot of talk in the devops community about the importance of sharing principles and values, and about silo busting: breaking down the "wall of confusion" between developers and operations to create agile, cross-functional teams. Radical improvement through fundamental organizational changes and building an entirely new culture. But it doesn't have to be that hard. All it took us was 3 simple, but important, steps.

Reliability First

When we first launched our online platform, things were pretty crazy. Sales was busy with customer feedback and onboarding more customers. Development was still finishing the backlog of features that were supposed to already be done, responding to changes from sales and partners, and helping to support the system. Ops was trying to stabilize everything, help onboard more customers and address performance issues as more customers came on. We were all rushing forwards, but not always in the same direction. Our CEO recognized this and made an important decision. He made reliability the #1 priority – the reliability and integrity of our systems and of our customers' data, and the services we provided. For everyone: not just ops, but development, sales, marketing, compliance, admin. Above everything else. It was more important not to mess up the customers that we had than to get new customers or hit deadlines or cut costs. Reliability, resilience and integrity have remained our #1 driver for the company over several years as we continued to grow. This meant that everyone was working towards the same goals – and the goals were easy to understand and measure: improving MTTF, MTTD and MTTR windows; reducing bug counts and variability in response time; improving results of audits and pen tests. It gave people more reasons to work together at more levels. It reduced politics and conflicts to a minimum.
Development's first priority changed from pushing features out ASAP to making sure that the system was running optimally and that any changes wouldn't negatively impact customers. This meant more time spent with ops on understanding the run-time, more time troubleshooting operational issues, more reviews and regression testing and stress testing, anticipating compatibility issues, planning for roll-back and roll-forward recovery.

Smaller, more frequent releases

Spending some more time on testing and reviews and working with ops meant that it took longer to complete a feature. But we still had to keep up with customer demands – we still had to deliver. We did this by shortening the software release cycle, from 2-3 months to 2-3 weeks or sometimes shorter. Delivering less in each release, sometimes only 1 new feature or some fixes, but delivering much more often. If a change or feature had to be delayed, if developers or testers needed more time to make sure that it was ready, it wasn't a big deal – if something wasn't ready this week, it would be ready next week or soon after, still fast enough for the business and for customers. Planning and working in shorter horizons meant that development could respond faster to changes in direction and changing priorities, so developers were always focused on delivering what was most important at the time. Shorter releases drove development to be more agile, to think and work faster. To automate more testing. To pay more attention to deployment, make the steps simpler and more efficient – and safer. Fewer changes batched together made it easier to review and test. Less chance of making mistakes. Easier to understand what went wrong when we did make a mistake, and less time to fix it.

RCA – Learn from Mistakes

We still made mistakes, shit still happened. When something went seriously wrong, it was my job to explain it to our customers and owners.
What went wrong, why, and what we were going to do to make sure that it didn't happen again. We didn't know about blameless post mortems, but this is the way we did it anyway. We got developers and testers and ops and managers together in Root Cause Analysis sessions to carefully examine what happened, what went wrong, understand why, and fix it. We made sure that people focused on the facts and on problem solving: what happened, what happened next, what did we see, what didn't we see, why? What could we do to fix it or to prevent it from happening again or to recognize and respond to problems like this more effectively in the future? Better training, better tools, better procedures, better documentation, better error handling, better testing and reviews, better configuration checks and run-time checking, better information and better ways of communicating it. Focusing on details and problems, not people. Proving that it was ok to make mistakes, but not ok to hide them. We got much better: at operations, testing, design, deployment, monitoring, incident handling. And better as an organization. We built transparency and trust within and across teams. We learned how to move forward from failure, and to be more resilient and confident in our ability to deal with serious problems.

Delivering Better and Faster Together

We didn't restructure or change who we were as an organization. Dev and ops still work in separate organizations for different managers in different countries. They have their own projects and their own ways of working, and they don't always speak the same language or agree on everything. We have lots of checks and balances and handoffs and paperwork between dev and ops to make sure that things are done properly and to make the regulators happy. There are still more steps that we could automate or simplify, more we can do to build out our Continuous Delivery pipelines, more things we can get out of Puppet and Vagrant and other cool tools.
But if devops is about developers and operations sharing responsibility for the system, trusting each other and helping each other to make sure that the system is always working correctly and optimally, looking for better solutions together, delivering better and faster – then we've been doing devops for a while now.

Reference: Driving Devops from our JCG partner Jim Bird at the Building Real Software blog....

Groovy Goodness: Define Compilation Customizers With Builder Syntax

Since Groovy 2.1 we can use a nice builder syntax to define customizers for a CompilerConfiguration instance. We must use the static withConfig method of the class CompilerCustomizationBuilder in the package org.codehaus.groovy.control.customizers.builder. We pass a closure with the code to define and register the customizers. For all the different customizers like ImportCustomizer, SecureASTCustomizer and ASTTransformationCustomizer there is a nice, compact syntax. In the following sample we use this builder syntax to define different customizers for a CompilerConfiguration instance:

```groovy
package com.mrhaki.blog

import org.codehaus.groovy.control.customizers.ASTTransformationCustomizer
import org.codehaus.groovy.control.CompilerConfiguration
import org.codehaus.groovy.control.customizers.builder.CompilerCustomizationBuilder
import groovy.transform.*

def conf = new CompilerConfiguration()

// Define CompilerConfiguration using
// builder syntax.
CompilerCustomizationBuilder.withConfig(conf) {
    ast(TupleConstructor)
    ast(ToString, includeNames: true, includePackage: false)
    imports {
        alias 'Inet', 'java.net.URL'
    }
    secureAst {
        methodDefinitionAllowed = false
    }
}

def shell = new GroovyShell(conf)
shell.evaluate '''
package com.mrhaki.blog

class User {
    String username, fullname
}

// TupleConstructor is added.
def user = new User('mrhaki', 'Hubert A. Klein Ikkink')

// toString() added by ToString transformation.
assert user.toString() == 'User(username:mrhaki, fullname:Hubert A. Klein Ikkink)'

// Use alias import.
def site = new Inet('http://www.mrhaki.com/')
assert site.text
'''
```

Code written with Groovy 2.2.2.

Reference: Groovy Goodness: Define Compilation Customizers With Builder Syntax from our JCG partner Hubert Ikkink at the JDriven blog....

The 7 Log Management Tools Java Developers Should Know

Splunk vs. Sumo Logic vs. LogStash vs. GrayLog vs. Loggly vs. PaperTrails vs. Splunk>Storm

Splunk, Sumo Logic, LogStash, GrayLog, Loggly, PaperTrails – did I miss anyone? I'm pretty sure I did. Logs are like fossil fuels – we've been wanting to get rid of them for the past 20 years, but we're not quite there yet. Well, if that's the case, I want a BMW! To deal with the growth of log data, a host of log management and analysis tools have been built over the last few years to help developers and operations make sense of it. I thought it'd be interesting to look at our options and at each tool's selling points, from a developer's standpoint.

Splunk

As the biggest tool in this space, I decided to put Splunk in a category of its own. That's not to say it's the best tool for what you need, but more to give credit to a product that essentially created a new category.

Pros

Splunk is probably the most feature-rich solution in the space. It's got hundreds of apps (I counted 537) to make sense of almost every format of log data, from security to business analytics to infrastructure monitoring. Splunk's search and charting tools are feature-rich to the point that there's probably no set of data you can't get to through its UI or APIs.

Cons

Splunk has two major cons. The first, which is more subjective, is that it's an on-premise solution, which means that setup costs in terms of money and complexity are high. To deploy in a high-scale environment you will need to install and configure a dedicated cluster. As a developer, it's usually something you can't or don't want to do as your first choice. Splunk's second con is that it's expensive. To support a real-world application you're looking at tens of thousands of dollars, which most likely means you'll need sign-offs from higher-ups in your organization, and the process is going to be slow. If you've got a new app and you want something fast that you can quickly spin up and ramp as things progress – keep reading.
Some more enterprise log analyzers can be found here.

SaaS Log Analyzers

Sumo Logic

Sumo was founded as a SaaS version of Splunk, going so far as to imitate some of Splunk's features and visuals early on. Having said that, Sumo Logic has developed into a full-fledged, enterprise-class log management solution.

Pros

Sumo Logic is chock-full of features to reduce, search and chart mass amounts of data. Out of all the SaaS log analyzers, it's probably the most feature-rich. Also, being a SaaS offering inherently means setup and ongoing operation are easier. One of Sumo Logic's main points of attraction is the ability to establish baselines and to actively notify you when key metrics change after an event such as a new version rollout or a breach attempt.

Cons

This one is shared across all SaaS log analyzers: you need to get the data to the service to actually do something with it. This means you'll be looking at possibly GBs (or more) uploaded from your servers, which can create issues on multiple fronts:

- As a developer, if you're logging sensitive data or PII, you need to make sure it's redacted.
- There may be a lag between the time data is logged and the time it's visible to the service.
- There's additional overhead on your machines transmitting GBs of data, which really depends on your logging throughput.

Sumo's pricing is also not transparent, which means you might be looking at a buying process which is more complex than swiping your team's credit card to get going.

Loggly

Loggly is also a robust log analyzer, focusing on simplicity and ease of use for a devops audience.

Pros

Whereas Sumo Logic has a strong enterprise and security focus, Loggly is geared more towards helping devops find and fix operational problems. This makes it very developer-friendly. Things like creating custom performance and devops dashboards are super-easy to do. Pricing is also transparent, which makes getting started easier.
Cons

Don't expect Loggly to scale into a full-blown infrastructure, security or analytics solution. If you need forensics or infrastructure monitoring you're in the wrong place. This is a tool mainly for devops to parse data coming from your app servers. Anything beyond that you'll have to build yourself.

PaperTrails

PaperTrails is a simple way to look and search through logs from multiple machines, in one consolidated, easy-to-use interface. Think of it like tailing your log in the cloud, and you won't be too far off.

Pros

PT is what it is: a simple way to look at log files from multiple machines in a singular view in the cloud. The UX itself is very similar to looking at a log on your machine, and so are the search commands. It aims to do something simple and useful, and does it elegantly. It's also very affordable.

Cons

PT is mostly text based. Looking for any advanced integrations, predictive or reporting capabilities? You're barking up the wrong tree.

Splunk>Storm

This is Splunk's little (some may say step-) SaaS brother. It's a pretty similar offering that's hosted on Splunk's servers.

Pros

Storm lets you experiment with Splunk without having to install the actual software on-premise, and contains many of the features available in the full version.

Cons

This isn't really a commercial offering, and you're limited in the amount of data you can send. It seems to be more of an online limited version of Splunk meant to help people test out the product without having to deploy first. A new service called Splunk Cloud is aimed at providing a full-blown Splunk SaaS experience.

Open Source Analyzers

Logstash

Logstash is an open source tool for collecting and managing log files. It's part of an open-source stack which includes ElasticSearch for indexing and searching through data and Kibana for charting and visualizing data.
Together they form a powerful log management solution.

Pros

Being an open-source solution means you inherently get a lot of control and a very good price. Logstash uses three mature and powerful components, all heavily maintained, to create a very robust and extensible package. For an open-source solution it’s also very easy to install and start using. We use Logstash and love it.

Cons

As Logstash is essentially a stack, you’re dealing with three different products, which means that extensibility also becomes complex. Logstash filters are written in Ruby, Kibana is pure JavaScript, and ElasticSearch has its own REST API as well as JSON templates. When you move to production, you’ll also need to separate the three onto different machines, which adds to the complexity.

Graylog2

A fairly new player in the space, GL2 is an open-source log analyzer backed by MongoDB as well as ElasticSearch (similar to Logstash) for storing and searching through log errors. It’s mainly focused on helping developers detect and fix errors in their apps. Also in this category you can find fluentd and Kafka, one of whose main use-cases is also storing log data. Phew, so many choices!

Takipi for Logs

While this post is not about Takipi, I thought there’s one feature it has which you might find relevant to all of this. The biggest disadvantage of all log analyzers, and log files in general, is that the right data has to be put there by you first. From a dev perspective, it means that if an exception isn’t logged, or the variable data you need to understand why it happened isn’t there, no log file or analyzer in the world can help you. Production debugging sucks! One of the things we’ve added to Takipi is the ability to jump into a recorded debugging session straight from a log file error. This means that for every log error you can see the actual source code and variable values at the moment of error. You can learn more about it here.
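The point above about getting the right data into the log in the first place can be sketched with plain java.util.logging: log the exception itself (so the stack trace survives) together with the variable state you’ll need later. The class, method and variable names below are my own illustrative assumptions, not taken from any of the tools discussed:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderProcessor {

    private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

    // Build a message that carries the variable state needed to debug the failure later.
    static String describeFailure(long userId, String orderId, int retryCount) {
        return String.format("Order processing failed: userId=%d, orderId=%s, retries=%d",
                userId, orderId, retryCount);
    }

    static void process(long userId, String orderId) {
        int retryCount = 3; // illustrative state we want preserved in the log
        try {
            throw new IllegalStateException("payment gateway timeout"); // simulated failure
        } catch (IllegalStateException e) {
            // Pass the exception as the last argument so the full stack trace is logged,
            // not just e.getMessage() - and include the context variables in the message.
            LOG.log(Level.SEVERE, describeFailure(userId, orderId, retryCount), e);
        }
    }

    public static void main(String[] args) {
        process(42L, "A-1001");
    }
}
```

The habit worth noting: if `retryCount` or `orderId` never make it into the message, no analyzer downstream can reconstruct them for you.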
This is one post where I would love to hear from you guys about your experiences with some of the tools mentioned (and some that I didn’t). I’m sure there are things you would disagree with or would like to correct me on – so go ahead, the comment section is below and I would love to hear from you.

Reference: The 7 Log Management Tools Java Developers Should Know from our JCG partner Tal Weiss at the Takipi blog.

Working with Google Analytics API v4 for Android

For v4 of the Google Analytics API for Android, Google has moved the implementation into Google Play Services. As part of the move the EasyTracker class has been removed, but it is still possible to get a fairly simple ‘automatic’ Tracker up and running with little effort. In this post I’ll show you how.

Assumptions:
  • You’re already using the Google Analytics v3 API EasyTracker class and just want to do a basic migration to v4 – or –
  • You just want to set up a basic analytics Tracker that sends a hit when the user starts an activity
  • You already have the latest Google Play Services up and running in your Android app

Let’s get started. Because you already have the Google Play Services library in your build, all the necessary helper classes will already be available to your code (if not, see here). The v4 Google Analytics API has a number of helper classes and configuration options which can make getting up and running fairly straightforward, but I found the documentation to be a little unclear, so here’s what to do…

Step 1. Create the following global_tracker.xml config file and add it to your Android application’s res/xml folder. This will be used by the GoogleAnalytics class as its basic global config. You’ll need to customise the screen names for your app. Note that there is no ‘Tracking ID’ in this file – that comes later. Of note here is the ga_dryRun element, which is used to switch the sending of tracking reports to Google Analytics on or off. You can use this setting in debug to prevent live and debug data getting mixed up.
<?xml version="1.0" encoding="utf-8"?>
<resources xmlns:tools="http://schemas.android.com/tools"
    tools:ignore="TypographyDashes">

    <!-- the Local LogLevel for Analytics -->
    <string name="ga_logLevel">verbose</string>

    <!-- how often the dispatcher should fire -->
    <integer name="ga_dispatchPeriod">30</integer>

    <!-- Treat events as test events and don't send to google -->
    <bool name="ga_dryRun">false</bool>

    <!-- The screen names that will appear in reports -->
    <string name="com.mycompany.MyActivity">My Activity</string>

</resources>

Step 2. Now add a second file, “app_tracker.xml”, to the same folder location (res/xml). There are a few things of note in this file. You should change the ga_trackingId to the Google Analytics Tracking Id for your app (you get this from the analytics console). Setting ga_autoActivityTracking to ‘true’ is important for this tutorial – this makes setting up and sending tracking hits from your code much simpler. Finally, be sure to customise your screen names, adding one for each activity where you’ll be adding tracking code.
<?xml version="1.0" encoding="utf-8"?>
<resources xmlns:tools="http://schemas.android.com/tools"
    tools:ignore="TypographyDashes">

    <!-- The app's Analytics Tracking Id -->
    <string name="ga_trackingId">UX-XXXXXXXX-X</string>

    <!-- Percentage of events to include in reports -->
    <string name="ga_sampleFrequency">100.0</string>

    <!-- Enable automatic Activity measurement -->
    <bool name="ga_autoActivityTracking">true</bool>

    <!-- catch and report uncaught exceptions from the app -->
    <bool name="ga_reportUncaughtExceptions">true</bool>

    <!-- How long a session exists before giving up -->
    <integer name="ga_sessionTimeout">-1</integer>

    <!-- If ga_autoActivityTracking is enabled, an alternate screen name can be
         specified to substitute for the full-length canonical Activity name in
         screen view hits. To specify an alternate screen name, use a <screenName>
         element, with the name attribute specifying the canonical name and the
         value the alias to use instead. -->
    <screenName name="com.mycompany.MyActivity">My Activity</screenName>

</resources>

Step 3. Last in terms of config, modify your AndroidManifest.xml by adding the following line within the ‘application’ element. This configures the GoogleAnalytics class (a singleton which controls the creation of Tracker instances) with the basic configuration in the res/xml/global_tracker.xml file.

<!-- Google Analytics Version v4 needs this value for easy tracking -->
<meta-data
    android:name="com.google.android.gms.analytics.globalConfigResource"
    android:resource="@xml/global_tracker" />

That’s all the basic xml configuration done.

Step 4. We can now add (or modify) your application’s ‘Application’ class so it contains some Trackers that we can reference from our activities.
package com.mycompany;

import android.app.Application;

import com.google.android.gms.analytics.GoogleAnalytics;
import com.google.android.gms.analytics.Tracker;

import java.util.HashMap;

public class MyApplication extends Application {

    // The following line should be changed to include the correct property id.
    private static final String PROPERTY_ID = "UX-XXXXXXXX-X";

    // Logging TAG
    private static final String TAG = "MyApp";

    public static int GENERAL_TRACKER = 0;

    public enum TrackerName {
        APP_TRACKER,       // Tracker used only in this app.
        GLOBAL_TRACKER,    // Tracker used by all the apps from a company, e.g. roll-up tracking.
        ECOMMERCE_TRACKER, // Tracker used by all ecommerce transactions from a company.
    }

    HashMap<TrackerName, Tracker> mTrackers = new HashMap<TrackerName, Tracker>();

    public MyApplication() {
        super();
    }

    synchronized Tracker getTracker(TrackerName trackerId) {
        if (!mTrackers.containsKey(trackerId)) {
            GoogleAnalytics analytics = GoogleAnalytics.getInstance(this);
            Tracker t = (trackerId == TrackerName.APP_TRACKER)
                    ? analytics.newTracker(R.xml.app_tracker)
                    : (trackerId == TrackerName.GLOBAL_TRACKER)
                            ? analytics.newTracker(PROPERTY_ID)
                            : analytics.newTracker(R.xml.ecommerce_tracker);
            mTrackers.put(trackerId, t);
        }
        return mTrackers.get(trackerId);
    }
}

Either ignore the ECOMMERCE_TRACKER or create an xml file in res/xml called ecommerce_tracker.xml to configure it. I’ve left it in the code just to show it’s possible to have additional trackers besides APP and GLOBAL. There is a sample xml configuration file for the ecommerce_tracker in \extras\google\google_play_services\samples\analytics\res\xml, but it simply contains the tracking_id property discussed earlier.

Step 5. At last we can now add some actual hit tracking code to our activity. First, import the class com.google.android.gms.analytics.GoogleAnalytics and initialise the application-level tracker in your activity’s onCreate() method. Do this in each activity you want to track.
//Get a Tracker (should auto-report)
((MyApplication) getApplication()).getTracker(MyApplication.TrackerName.APP_TRACKER);

Then, in onStart(), record a user start ‘hit’ with analytics when the activity starts up. Do this in each activity you want to track.

//Get an Analytics tracker to report app starts & uncaught exceptions etc.
GoogleAnalytics.getInstance(this).reportActivityStart(this);

Finally, record the end of the user’s activity by sending a stop hit to analytics during the onStop() method of our Activity. Do this in each activity you want to track.

//Stop the analytics tracking
GoogleAnalytics.getInstance(this).reportActivityStop(this);

And Finally… If you now compile and install your app on your device and start it up, assuming you set ga_logLevel to verbose and ga_dryRun to false, in logCat you should see some of the following log lines confirming your hits being sent to Google Analytics.

com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: connecting to Analytics service
com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: connect: bindService returned false for Intent { act=com.google.android.gms.analytics.service.START cmp=com.google.android.gms/.analytics.service.AnalyticsService (has extras) }
com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: Loaded clientId
com.mycompany.myapp I/GAV3? Thread[GAThread,5,main]: No campaign data found.
com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: Initialized GA Thread
com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: putHit called
...
com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: Dispatch running...
com.mycompany.myapp V/GAV3? Thread[GAThread,5,main]: sent 1 of 1 hits

Even better, if you’re logged into the Google Analytics console’s reporting dashboard, on the ‘Real Time – Overview’ page, you may even notice the following…

Reference: Working with Google Analytics API v4 for Android from our JCG partner Ben Wilcock at the Ben Wilcock blog.

Java EE CDI Qualifiers: Quick Peek

Qualifiers are the mainstay of type safety and loose coupling in Contexts and Dependency Injection (CDI). Why? Without CDI, we would be injecting Java EE components in a manner similar to the snippets below. Note: this will actually not compile and is just a hypothetical code snippet.

Example 1

Example 2

What’s wrong with the above implementations?
  • Not type safe – uses a String to specify the fully qualified name of an implementation class (see Example 1)
  • Tightly couples the BasicCustomerPortal class to the BasicService class (see Example 2)

This is exactly why CDI does not do injection this way! Qualifiers help promote:
  • Loose coupling – an explicit class is not introduced within another; implementations are detached from each other
  • Strong typing (type safety) – no String literals to define injection properties/metadata

Qualifiers also serve as:
  • Binding components between beans and Decorators
  • Event selectors for Observers (event consumers)

How to use Qualifiers?

CDI Qualifiers Simplified

Simplified steps:
  • Create a Qualifier
  • Apply Qualifiers to different implementation classes
  • Use the Qualifiers along with @Inject to inject the instance of the appropriate implementation within a class

This was not a detailed or in-depth post about CDI Qualifiers; it’s more of a quick reference. Click for source code.

More on CDI:
  • The Specification page (CDI 1.2)
  • Official CDI page

Thanks for reading!

Reference: Java EE CDI Qualifiers: Quick Peek from our JCG partner Abhishek Gupta at the Object Oriented.. blog.
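To make the simplified steps above concrete without a full CDI container, here is a plain-Java sketch of the mechanism qualifiers automate: annotations bind implementations to injection points, so no String literals and no hard-coded `new` calls are needed. All names here (Basic, Premium, CustomerService, resolve) are my own illustrative assumptions; in real CDI the qualifier annotations would additionally carry @Qualifier, and the container, not hand-written reflection, would do the resolving:

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.List;

public class QualifierDemo {

    // Marker annotations playing the role of CDI qualifiers.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Basic {}

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Premium {}

    interface CustomerService { String greet(); }

    @Basic
    static class BasicService implements CustomerService {
        public String greet() { return "basic"; }
    }

    @Premium
    static class PremiumService implements CustomerService {
        public String greet() { return "premium"; }
    }

    // Roughly what the container does for '@Inject @Basic CustomerService':
    // pick the implementation carrying the requested qualifier annotation.
    static CustomerService resolve(Class<? extends Annotation> qualifier) throws Exception {
        for (Class<? extends CustomerService> impl : List.of(BasicService.class, PremiumService.class)) {
            if (impl.isAnnotationPresent(qualifier)) {
                return impl.getDeclaredConstructor().newInstance();
            }
        }
        throw new IllegalStateException("No implementation for " + qualifier.getSimpleName());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(resolve(Basic.class).greet());   // basic
        System.out.println(resolve(Premium.class).greet()); // premium
    }
}
```

Note how swapping the injected implementation only means asking for a different annotation type, which the compiler checks – the type-safety point the post makes against String-based lookup.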

Do not underestimate the power of the fun

Do you like your tools? Are you working with the technology, programming language and tools that you like? Are you having fun working with them? When a new project is started, the company has to decide what technologies, frameworks and tools will be used to develop it. The most common-sense factor to take into consideration is the tool’s ability to get the job done. However, especially in the Java world, there is usually more than one tool able to pass this test. Well, usually there are tens, if not hundreds, of them. So other factors have to be used. The next important, and also quite obvious, one is how easy the tool is to use, and how fast we can get the job done with it. “Easy” is subjective, and “fast” depends strongly on the tool itself and the environment it is used in – like the tool’s learning curve or the developers’ knowledge of it. While the developers’ knowledge of the tool is usually taken into account, their desire to work with it (or not) usually is not. Here I would like to convince you that it is really important too.

Known != best

There are cases where it’s better to choose cool tools instead of known ones. Yes, the developers need to learn them, and that obviously costs some time, but I believe it is an investment that pays off later – especially if the alternatives are ones that the devs are experienced with but don’t want to use any more. Probably there are some people who like to code in the same language and use the same frameworks for 10 years, but I don’t know many of them. Most of the coders I know like to learn new languages and use new frameworks, tools and libs. Sadly, some of them can’t do it because of corporate policies, customer requirements or other restrictions. Why do I believe such an investment pays off? If you think a developer writes 800 LOC/day, so 100 LOC/hour, so 10 LOC/minute… well, you’re wrong. Developers are not machines working at a constant speed 9 to 5.
Sometimes we are “in the zone”, coding like crazy (let’s leave the code quality aside); sometimes we are creative, working with pen and paper, inventing clever solutions, algorithms etc.; and sometimes we are just bored, forcing ourselves to put the 15th form on the page or write boilerplate code.

The power of fun

Now ask yourself: which of these states are you (or your developers) in most often? And if you are often bored, working a 5th year with the same technology and tools, think about the time when you were learning it. Remember when you were using it for the first time? Were you bored then? Or rather excited? Were you less productive? It’s a truism, but we are not productive when we need to force ourselves to work. Maybe it’s a good idea to change your work to be more fun? Use some tools you don’t know (yet) but really want to try? It might seem you are going to be less productive, at least at the beginning, but is it really true? Moreover, if it allows you to write less boilerplate code, or use closures or anything else that can make you faster and more efficient in the long run, it seems a really good investment. There is one more advantage of cool and fun tools. If you are a company owner, do you want your business partners to consider your company expensive but very good, delivering high-quality services and worth the price, or not-so-good but cheap? I don’t know any software company that wants the latter. We all want to be good – and earn more, but well-deserved, money. Now think about the good and best developers – where do they go? Do they choose companies where they have to work with old, boring tools and frameworks? Even if you pay them a lot, the best devs are not motivated by money. Probably you know that already. Good devs are the ones that like to learn and discover new stuff. There is no better way to learn new stuff than working with it.
And there are not many things that are as much fun for a geek as working with languages, technologies and tools they like. So, when choosing tools for your next project, take the fun factor into account. Or even better – let the developers make the choice.

Reference: Do not underestimate the power of the fun from our JCG partner Pawel Stawicki at the Java, the Programming, and Everything blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
