
About Andrey Redko

Andriy is a well-grounded software developer with more than 12 years of practical experience using Java/EE, C#/.NET, C++, Groovy, Ruby, functional programming (Scala), databases (MySQL, PostgreSQL, Oracle) and NoSQL solutions (MongoDB, Redis).

Apache Mahout: Getting started

Recently I got an interesting problem to solve: how to classify text from different sources automatically? Some time ago I read about a project which does this, as well as many other text analysis tasks: Apache Mahout. Though it is not a very mature one (the current version is 0.4), it is very powerful and scalable. Built on top of another excellent project, Apache Hadoop, it is capable of analyzing huge data sets.

So I did a small project in order to understand how Apache Mahout works. I decided to use Apache Maven 2 to manage all the dependencies, so I will start with the POM file first.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.acme</groupId>
  <artifactId>mahout</artifactId>
  <version>0.94</version>
  <name>Mahout Examples</name>
  <description>Scalable machine learning library examples</description>
  <packaging>jar</packaging>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <apache.mahout.version>0.4</apache.mahout.version>
  </properties>
 
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <encoding>UTF-8</encoding>
          <source>1.6</source>
          <target>1.6</target>
          <optimize>true</optimize>
        </configuration>
      </plugin>
    </plugins>
  </build>

  <dependencies>
    <dependency>
      <groupId>org.apache.mahout</groupId>
      <artifactId>mahout-core</artifactId>
      <version>${apache.mahout.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.mahout</groupId>
      <artifactId>mahout-math</artifactId>
      <version>${apache.mahout.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.mahout</groupId>
      <artifactId>mahout-utils</artifactId>
      <version>${apache.mahout.version}</version>
    </dependency>

    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>1.6.0</version>
    </dependency>

    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-jcl</artifactId>
      <version>1.6.0</version>
    </dependency>
  </dependencies>
</project>

Then I looked into the Apache Mahout examples and the algorithms available for the text classification problem. The simplest and most accurate one is the Naive Bayes classifier. Here is a code snippet:

package org.acme;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.Path;
import org.apache.mahout.classifier.ClassifierResult;
import org.apache.mahout.classifier.bayes.TrainClassifier;
import org.apache.mahout.classifier.bayes.algorithm.BayesAlgorithm;
import org.apache.mahout.classifier.bayes.common.BayesParameters;
import org.apache.mahout.classifier.bayes.datastore.InMemoryBayesDatastore;
import org.apache.mahout.classifier.bayes.exceptions.InvalidDatastoreException;
import org.apache.mahout.classifier.bayes.interfaces.Algorithm;
import org.apache.mahout.classifier.bayes.interfaces.Datastore;
import org.apache.mahout.classifier.bayes.model.ClassifierContext;
import org.apache.mahout.common.nlp.NGrams;

public class Starter {
    public static void main( final String[] args ) {
        // Parameters shared by both training and classification
        final BayesParameters params = new BayesParameters();
        params.setGramSize( 1 );
        params.set( "verbose", "true" );
        params.set( "classifierType", "bayes" );
        params.set( "defaultCat", "OTHER" );
        params.set( "encoding", "UTF-8" );
        params.set( "alpha_i", "1.0" );
        params.set( "dataSource", "hdfs" );
        params.set( "basePath", "/tmp/output" );

        try {
            // Train the model from the labeled examples in /tmp/input
            final Path input = new Path( "/tmp/input" );
            TrainClassifier.trainNaiveBayes( input, "/tmp/output", params );

            final Algorithm algorithm = new BayesAlgorithm();
            final Datastore datastore = new InMemoryBayesDatastore( params );
            final ClassifierContext classifier = new ClassifierContext( algorithm, datastore );
            classifier.initialize();

            // Classify the file passed as the first argument, line by line
            final BufferedReader reader = new BufferedReader( new FileReader( args[ 0 ] ) );
            try {
                String entry = reader.readLine();
                while( entry != null ) {
                    final List< String > document = new NGrams( entry,
                        Integer.parseInt( params.get( "gramSize" ) ) )
                        .generateNGramsWithoutLabel();

                    final ClassifierResult result = classifier.classifyDocument(
                        document.toArray( new String[ document.size() ] ),
                        params.get( "defaultCat" ) );

                    // Print the detected category next to the original text
                    System.out.println( entry + " -> " + result.getLabel() );

                    entry = reader.readLine();
                }
            } finally {
                reader.close();
            }
        } catch( final IOException ex ) {
            ex.printStackTrace();
        } catch( final InvalidDatastoreException ex ) {
            ex.printStackTrace();
        }
    }
}
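For intuition about what classifyDocument does under the hood: Naive Bayes scores each category by summing per-term log-weights over the document's terms and picks the maximum. The following self-contained toy sketches that decision rule; it is not Mahout code, and the class name, categories and weights are all invented for illustration (Mahout derives its real weights from TF-IDF statistics during training):

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the naive Bayes decision rule: each category keeps
// per-term log-weights, a document is scored by summing the weights of the
// terms it contains, and the highest-scoring category wins.
public class ToyBayes {
    private final Map< String, Map< String, Double > > weights =
        new HashMap< String, Map< String, Double > >();

    public void addWeight( final String category, final String term, final double logWeight ) {
        Map< String, Double > terms = weights.get( category );
        if( terms == null ) {
            terms = new HashMap< String, Double >();
            weights.put( category, terms );
        }
        terms.put( term, logWeight );
    }

    // Returns the best-scoring category, or defaultCat if no term matched.
    public String classify( final String[] terms, final String defaultCat ) {
        String best = defaultCat;
        double bestScore = Double.NEGATIVE_INFINITY;
        for( final Map.Entry< String, Map< String, Double > > entry : weights.entrySet() ) {
            double score = 0.0;
            boolean matched = false;
            for( final String term : terms ) {
                final Double weight = entry.getValue().get( term );
                if( weight != null ) {
                    score += weight;
                    matched = true;
                }
            }
            if( matched && score > bestScore ) {
                bestScore = score;
                best = entry.getKey();
            }
        }
        return best;
    }

    public static void main( final String[] args ) {
        final ToyBayes bayes = new ToyBayes();
        bayes.addWeight( "QUESTION", "sell", -1.5 );
        bayes.addWeight( "SUGGESTION", "great", -1.0 );
        // "sell" matches only QUESTION, "xyzzy" matches nothing
        System.out.println( bayes.classify( new String[] { "do", "you", "sell" }, "OTHER" ) ); // prints QUESTION
        System.out.println( bayes.classify( new String[] { "xyzzy" }, "OTHER" ) );             // prints OTHER
    }
}
```

The defaultCat fallback mirrors the 'defaultCat' parameter above: when no evidence is found for any category, the classifier returns the default one.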

There is one important note here: the classifier must be trained before it can classify anything. In order to do so, it’s necessary to provide examples (the more, the better) of text in each category. These should be simple text files where each line starts with the category, separated by a tab from the text itself. For example:

SUGGESTION  That's a great suggestion
QUESTION  Do you sell Microsoft Office?
...

The more files and examples you provide, the more precise a classification you will get. All files must be put into the ‘/tmp/input’ folder; they will be processed by Apache Hadoop first. :)
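Producing such training files programmatically is straightforward. As a small sketch (the class name, file name and example sentences below are my own placeholders, not anything Mahout prescribes), something like this writes the tab-separated format into the input folder:

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// Writes a training file in the format the Bayes trainer expects:
// one "CATEGORY<TAB>text" pair per line.
public class TrainingDataWriter {
    // Builds a single training line: category, a tab, then the text.
    public static String formatLine( final String category, final String text ) {
        return category + "\t" + text;
    }

    public static void writeExamples( final File file, final String[][] examples ) throws IOException {
        final PrintWriter out = new PrintWriter( new FileWriter( file ) );
        try {
            for( final String[] example : examples ) {
                out.println( formatLine( example[ 0 ], example[ 1 ] ) );
            }
        } finally {
            out.close();
        }
    }

    public static void main( final String[] args ) throws IOException {
        final String[][] examples = {
            { "SUGGESTION", "That's a great suggestion" },
            { "QUESTION", "Do you sell Microsoft Office?" }
        };
        // Make sure the input folder exists, then write the file into it
        final File folder = new File( "/tmp/input" );
        folder.mkdirs();
        writeExamples( new File( folder, "training.txt" ), examples );
    }
}
```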

Reference: Getting started with Apache Mahout from our JCG partner Andriy Redko at the Andriy Redko {devmind} blog.


10 comments

  1. Hi, nice tutorial.
    I am able to run the code. I tested with a sample file which contains
    QUESTION
    SUGGESTION

    series, and I gave a test file consisting of sentences of questions and suggestions without any label.
    In the output directory I get three folders: “trainer-tfIdf”, “trainer-thetaNormalizer” and “trainer-weights”.

    How do I see the output?

    Can you please help?

    • Hi Ali,

      Thank you for your comment. The variable ‘result’ of ‘ClassifierResult’ contains the classification (including scores) for particular text or message. You can print it out on a console or output to another file. Please note that at the time, the post targeted version 0.4 of Apache Mahout. Current version is 0.7 and unfortunately those are not compatible at all.

      Please let me know if it’s helpful.
      Thank you.

      Best Regards,
      Andriy Redko

  2. Thanks for sharing this .

  3. Hi,

    Q1 .Does the above algorithm work on a distributed framework ? ( Assuming that we are keeping the input file in hdfs )
    Q2. Is the output folder referred here in hdfs ?
    Q3. I don’t see any map-reduce code here, so shall I assume only HDFS is used here but no parallel processing, because no map-reduce code is written here?

    Regards,
    Aparnesh

  4. Thank you for sharing the example. I am new to Apache Mahout. I tried to use your code in my environment, but I am facing issues. I know that the post is old, and you may not reply to my query, but I am writing as I am stuck.

    I tried to configure Maven in my environment, but due to company policy I was not able to do so successfully, so I decided to resolve the dependencies myself, and I installed all the required libraries one by one. But I am not able to find one last library; I tried looking on the internet for 2 days, but no luck. Maybe you can help me out.

    I am using Eclipse STS. I did try with Eclipse Mars2, but same problem. I have installed following set of libraries

    commons-cli-2.0.jar
    google-collection-1.0.jar
    hadoop-0.20.1-core.jar
    log4j-1.2.13.jar
    mahout-core0.2-source.jar
    mahout-core0.3-source.jar
    mahout-core0.4-job.jar
    mahout-core0.7.jar
    mahout-core0.8.jar
    mahout-math-0.8.jar
    mahout-utils-0.5.jar
    slf4j-api-1.6.1.jar
    slf4j-log4j12-16.1.jar
    JRE System Library (JavaSE-1.8)

    I have tried various permutations and combinations of libraries, hoping to get my work done. Unfortunately, it's not happening. Maybe you can help me out.

    Below are the error messages:

    Exception in thread “main” java.lang.NoSuchMethodError: org.apache.mahout.common.HadoopUtil.overwriteOutput(Lorg/apache/hadoop/fs/Path;)V
    at org.apache.mahout.classifier.bayes.mapreduce.bayes.BayesDriver.runJob(BayesDriver.java:39)
    at org.apache.mahout.classifier.bayes.TrainClassifier.trainNaiveBayes(TrainClassifier.java:54)
    at Starter.main(Starter.java:42)

    I tried looking for “org.apache.mahout.common.HadoopUtil.overwriteOutput” all over the internet, but I failed to find it. There are libraries with the name “org.apache.mahout.common.HadoopUtil”, but they don’t contain the required method.

    Please help

    • Hi Amitesh,

      The exceptions like this are an indication of a Hadoop version mismatch, unfortunately. I would suggest you look at the recent Apache Mahout documentation (https://mahout.apache.org/); a LOT of things have changed since the blog post was published. The good news is that you may get the desired results much, much faster :)

      Thank you.

      Best Regards,
      Andriy Redko

      • Thank you for your reply, and I will be looking into the version part for sure. However, I would like to bring a point to your notice: my Eclipse is on Windows, and my Mahout is installed on Linux. I didn’t run the code on my Mahout box yet; I executed the code inside my Eclipse on the Windows machine. Since I was facing issues, I did not touch Linux until now.

        • Hi Amitesh,

          Yes, I understand that you run everything from your Eclipse. The issue though is still caused by Java libraries. I see at least mahout-core0.7.jar and mahout-core0.8.jar in the list, which are conflicting versions. For the example of the article you need 0.7 only. Thank you.

          Best Regards,
          Andriy Redko
