
Let's Crunch big data

As developers, our focus is on simple, effective solutions, and so one of our most valued principles is "keep it simple and stupid". But with Hadoop MapReduce it is a bit hard to stick to this. If we evaluate data across multiple MapReduce jobs, we end up with code that is related less to the business problem and more to the infrastructure. Most non-trivial business data processing involves quite a few MapReduce tasks, which means longer turnaround times and solutions that are harder to test.

Google presented a solution to these issues in their FlumeJava paper, and that paper was adapted to implement Apache Crunch. In a nutshell, Crunch is a Java library that simplifies the development of MapReduce pipelines. It provides a set of lazily evaluated collections that can be used to perform various operations, which are executed as MapReduce jobs.

Here is what Brock Noland said in one of his posts introducing Crunch:

Using Crunch, a Java programmer with limited knowledge of Hadoop and MapReduce can utilize the Hadoop cluster. The program is written in pure Java and does not require the use of MapReduce specific constructs such as writing a Mapper, Reducer, or using Writable objects to wrap Java primitives.

Crunch supports reading data from various sources, such as sequence files, Avro, text, HBase, and JDBC, with a simple read API:

<T> PCollection<T> read(Source<T> source)
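As an illustrative sketch (not from the original post), here is how that read API is typically driven: a pipeline is created and handed a `Source`. The class and path names below are hypothetical; `MRPipeline` and `From.textFile` are Crunch's standard entry points.

```java
// Sketch: assumes the org.apache.crunch dependency on the classpath.
// MyApp and the input path are hypothetical placeholders.
Pipeline pipeline = new MRPipeline(MyApp.class, new Configuration());
PCollection<String> lines = pipeline.read(From.textFile("/data/input.txt"));
```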

You can import data in various formats such as JSON, Avro, and Thrift, and perform efficient join, aggregation, sort, Cartesian-product, and filter operations. Additionally, custom operations over these collections are quite easy to cook up: all you have to do is implement the simple, to-the-point DoFn interface. You can unit test your DoFn implementations without any MapReduce constructs.
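To make that testability claim concrete, here is a minimal, self-contained sketch. The `DoFn` and `Emitter` types below are simplified stand-ins I wrote for illustration (the real ones live in `org.apache.crunch` and carry more methods); the point is that a DoFn is just a function from an input to emitted outputs, so it can be exercised with a plain list and no MapReduce machinery.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for org.apache.crunch.Emitter (illustration only).
interface Emitter<T> {
    void emit(T value);
}

// Simplified stand-in for org.apache.crunch.DoFn (illustration only).
abstract class DoFn<S, T> {
    public abstract void process(S input, Emitter<T> emitter);
}

// A DoFn that splits a line into lowercase words.
class TokenizeFn extends DoFn<String, String> {
    @Override
    public void process(String line, Emitter<String> emitter) {
        for (String word : line.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) {
                emitter.emit(word);
            }
        }
    }
}

public class DoFnTest {
    // Unit-test style driver: collect emitted values into a plain list.
    public static List<String> run(DoFn<String, String> fn, String input) {
        List<String> out = new ArrayList<>();
        fn.process(input, out::add);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(run(new TokenizeFn(), "Lets Crunch Big Data"));
    }
}
```

Because `Emitter` is a single-method interface, a method reference like `out::add` is enough to capture the output, which is exactly why DoFns are so pleasant to unit test.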

I am not including a usage example here; it is quite simple, and one can be found on the Apache Crunch site.

Alternatively, you can generate a project from the available crunch-archetype, which also generates a simple WordCount example. The archetype can be selected using:

mvn archetype:generate -Dfilter=crunch-archetype

The generated project has quite a few examples covering Crunch's different aspects, and the library is also available in Scala.

So now let's CRUNCH some data!
 

Reference: Let's Crunch big data from our JCG partner Rahul Sharma at The road so far… blog.
