- sudo wget -O /etc/apt/sources.list.d/bigtop.list http://www.apache.org/dist/incubator/bigtop/bigtop-0.3.0-incubating/repos/ubuntu/bigtop.list
- sudo gedit /etc/apt/sources.list.d/bigtop.list
In the file, uncomment one of the nearby mirror links. The first one worked for me:
deb http://apache.01link.hk/incubator/bigtop/stable/repos/ubuntu/ bigtop contrib
sudo apt-cache search hadoop
|Search in the apt cache|
Step 5: Set your JAVA_HOME
Export JAVA_HOME in your ~/.bashrc (note: `export JAVA_HOME=<path>`, not `export $JAVA_HOME`).
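For example (the OpenJDK 6 path below is an assumption for Ubuntu of that era; point it at whatever JDK is actually installed on your machine):

```shell
# Set JAVA_HOME for the current shell -- the path is an example,
# adjust it to match your own JDK install.
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
export PATH=$JAVA_HOME/bin:$PATH

# Persist it for future shells:
echo "export JAVA_HOME=$JAVA_HOME" >> ~/.bashrc
```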
Step 6: Installing the complete Hadoop stack
sudo apt-get install hadoop\*
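The backslash matters here: it keeps your shell from expanding `hadoop*` against files in the current directory, so apt-get receives the literal pattern and matches every package whose name starts with `hadoop`. A quick demonstration of the escaping:

```shell
# The escaped glob reaches the command as a literal pattern:
echo hadoop\*
# → hadoop*
# An unescaped hadoop* could instead be expanded by the shell
# if any matching filenames happen to exist in the current directory.
```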
Step 1: Formatting the namenode
sudo -u hdfs hadoop namenode -format
|Formatting the namenode|
Step 2: Starting the Namenode, Datanode, Jobtracker, Tasktracker of Hadoop
for i in hadoop-namenode hadoop-datanode hadoop-jobtracker hadoop-tasktracker ; do sudo service $i start ; done
Now, the cluster is up and running.
|Start all the services|
Step 3: Creating a directory in HDFS
|Create a directory in HDFS|
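The command for this step is not shown above; a typical sequence with the Bigtop packages (run as the hdfs superuser, with `<username>` as a placeholder for your own user) would be:

```shell
# Create a home directory in HDFS and hand ownership to your user.
# <username> is a placeholder -- substitute your actual username.
sudo -u hdfs hadoop fs -mkdir /user/<username>
sudo -u hdfs hadoop fs -chown <username> /user/<username>
```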
Step 4: Listing the directories in the file system
hadoop fs -lsr /
Step 5: Running a sample pi example
hadoop jar /usr/lib/hadoop/hadoop-examples.jar pi 10 1000
|Running a sample program|
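Here `10 1000` means 10 map tasks with 1000 samples each. The example is (roughly) a Monte Carlo estimator: each map throws random points at the unit square and counts how many fall inside the quarter circle, so the inside fraction approximates pi/4. The same idea as a local awk sketch (just the math, not the MapReduce job itself):

```shell
# Estimate pi by sampling: the fraction of random points in the unit
# square that land inside the quarter circle approximates pi/4.
awk 'BEGIN {
    srand(1); n = 100000; inside = 0
    for (i = 0; i < n; i++) {
        x = rand(); y = rand()
        if (x * x + y * y <= 1) inside++
    }
    printf "pi is roughly %.3f\n", 4 * inside / n
}'
```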
Enjoy your cluster! :) We shall see what more can be done with Hadoop (Hive, HBase, etc.) in the next post! Until then, Happy Learning!! :):)
Reference: Hadoop Hangover : Introduction To Apache Bigtop and Playing With It (Installing Hadoop)! from our JCG partner Swathi V at the * Techie(S)pArK * blog.