Crawling the Web with Cassandra and Nutch

So, you want to harvest a massive amount of data from the internet?  What better storage mechanism than Cassandra?  This is easy to do with Nutch.

Often people use HBase behind Nutch. This works, but it may not be an ideal solution if you are (or want to be) a Cassandra shop. Fortunately, Nutch 2+ uses the Gora abstraction layer to access its data storage mechanism, and Gora supports Cassandra. Thus, with a few tweaks to the configuration, you can use Nutch to harvest content directly into Cassandra.

We’ll start with Nutch 2.1. I like to build directly from source:

$ git clone https://github.com/apache/nutch.git -b 2.1
...
$ cd nutch
$ ant

After the build, you will have a nutch/runtime/local directory, which contains the binaries for execution.  Now let’s configure Nutch for Cassandra.
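
As a quick sanity check before moving on, the launcher script should now be in place:

$ ls runtime/local/bin   # you should see the nutch launcher script here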

First, we need to give our crawler an agent name, which identifies it to the sites it fetches. Add the following XML element to nutch/conf/nutch-site.xml:

<property>
 <name>http.agent.name</name>
 <value>My Nutch Spider</value>
</property>

Next we need to tell Nutch to use Gora Cassandra as its persistence mechanism. For that, we add the following element to nutch/conf/nutch-site.xml:

<property>
 <name>storage.data.store.class</name>
 <value>org.apache.gora.cassandra.store.CassandraStore</value>
 <description>Default class for storing data</description>
</property>
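
Both properties go inside the file’s <configuration> root element, so a minimal nutch/conf/nutch-site.xml ends up looking like this:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
 <property>
  <name>http.agent.name</name>
  <value>My Nutch Spider</value>
 </property>
 <property>
  <name>storage.data.store.class</name>
  <value>org.apache.gora.cassandra.store.CassandraStore</value>
  <description>Default class for storing data</description>
 </property>
</configuration>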

Next, we need to tell Gora about Cassandra.  Edit the nutch/conf/gora.properties file.  Comment out the SQL entries, and uncomment the following line:

gora.cassandrastore.servers=localhost:9160
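
Port 9160 is Cassandra’s Thrift port, so make sure your node has Thrift enabled and is listening there. After the edit, the relevant portion of gora.properties should look roughly like the snippet below (the exact SQL property names may differ in your checkout; the point is that the SQL store entries are commented out and the Cassandra entry is active):

#gora.sqlstore.jdbc.driver=org.hsqldb.jdbc.JDBCDriver
#gora.sqlstore.jdbc.url=jdbc:hsqldb:hsql://localhost/nutchtest
#gora.sqlstore.jdbc.user=sa
#gora.sqlstore.jdbc.password=

gora.cassandrastore.servers=localhost:9160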

Additionally, we need to add a dependency for gora-cassandra.  Edit the ivy/ivy.xml file and uncomment the following line:

<dependency org="org.apache.gora" name="gora-cassandra" rev="0.2" conf="*->default" />

Finally, we want to re-generate the runtime with the new configuration and the additional dependency.  Do this with the following ant command:

ant runtime

Now we are ready to run!

Create a directory called “urls”, with a file named seed.txt that contains the following line:

http://nutch.apache.org/
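
Assuming you run the crawl from runtime/local, the directory and seed file can be created like so:

$ mkdir -p urls
$ echo "http://nutch.apache.org/" > urls/seed.txt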

Next, update the URL regular expression in conf/regex-urlfilter.txt, replacing the catch-all +. accept line at the bottom so the crawl stays within nutch.apache.org:

+^http://([a-z0-9]*\.)*nutch.apache.org/

Now, crawl! Here, -depth limits how many link-levels the crawler follows out from the seed list, and -topN caps the number of pages fetched in each round:

bin/nutch crawl urls -dir crawl -depth 3 -topN 5

That will harvest web pages straight into Cassandra!

Let’s go look at the data model for a second…

You will notice that a new keyspace was created: webpage. That keyspace contains three column families, which cqlsh lists as tables: f, p, and sc.

[cqlsh 2.3.0 | Cassandra 1.2.1 | CQL spec 3.0.0 | Thrift protocol 19.35.0]
Use HELP for help.
cqlsh> describe keyspaces;
system  webpage  druid  system_auth  system_traces
cqlsh> use webpage;
cqlsh:webpage> describe tables;
f  p  sc
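
Since Gora 0.2 creates these column families over Thrift, the old cassandra-cli is often a more natural tool than cqlsh for a raw peek at the columns. For example, assuming a local node:

$ cassandra-cli -h localhost -p 9160
[default@unknown] use webpage;
[default@webpage] list f limit 2;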

Each of these tables is a pure key-value store.  To understand what is in each of them, take a look at the nutch/conf/gora-cassandra-mapping.xml file.  I’ve included a snippet below:

<field name="baseUrl" family="f" qualifier="bas"/>
<field name="status" family="f" qualifier="st"/>
<field name="prevFetchTime" family="f" qualifier="pts"/>
<field name="fetchTime" family="f" qualifier="ts"/>
<field name="fetchInterval" family="f" qualifier="fi"/>
<field name="retriesSinceFetch" family="f" qualifier="rsf"/>

From this mapping file, you can see which field lands in which column family and under which qualifier, but unfortunately the schema isn’t really conducive to exploration from the CQL prompt. (I think there is room for improvement here.) It would be nice if there were a CQL-friendly schema in place, but that may be difficult to achieve through Gora. Alas, that is probably the price of abstraction.

So, the easiest thing is to use the Nutch tooling to retrieve the data. You can dump the crawl data, including page content, with the following command:

runtime/local/bin/nutch readdb -dump data -content

When that completes, go into the data directory and you will see the output of the Hadoop job that was used to extract the data.  We can then use this for analysis.
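
The dump is plain text, so ordinary shell tools are enough for a first look. A minimal sketch, assuming the usual Hadoop part-file naming under the data directory:

$ ls data
$ head -n 40 data/part-*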

I really wish Nutch used a better schema for C*. It would be fantastic if that data were immediately usable from within C*. If someone makes that enhancement, please let me know!
 

