Apache Hadoop Tutorials

In this detailed Resource page, we feature an abundance of Apache Hadoop Tutorials!

Apache Hadoop is a collection of open-source software utilities that facilitate using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model. Originally designed for computer clusters built from commodity hardware—still the common use—it has also found use on clusters of higher-end hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures are common occurrences and should be automatically handled by the framework.

The core of Apache Hadoop consists of a storage part, known as Hadoop Distributed File System (HDFS), and a processing part which is a MapReduce programming model. Hadoop splits files into large blocks and distributes them across nodes in a cluster. It then transfers packaged code into nodes to process the data in parallel. This approach takes advantage of data locality, where nodes manipulate the data they have access to. This allows the dataset to be processed faster and more efficiently than it would be in a more conventional supercomputer architecture that relies on a parallel file system where computation and data are distributed via high-speed networking.
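
To make the model concrete, here is a minimal sketch of a MapReduce job in Java, essentially the classic word-count skeleton that several of the tutorials below build on. It is a hedged example, not taken from any one tutorial: it assumes a Hadoop 2.x installation with the MapReduce client libraries on the classpath, and the input/output paths are placeholders supplied on the command line.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map phase: runs on the nodes holding the HDFS blocks of the input
      // (data locality) and emits (word, 1) for every token in its split.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reduce phase: after the shuffle, receives all counts for a word
      // and sums them.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        // Summing is associative, so the reducer can safely double as a
        // combiner (see the Combiner tutorial below).
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

The framework takes care of splitting the input, scheduling tasks near their data, shuffling map output to the reducers, and retrying failed tasks, which is exactly the failure-handling assumption described above.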

The base Apache Hadoop framework is composed of the following modules:

  • Hadoop Common – contains libraries and utilities needed by other Hadoop modules;
  • Hadoop Distributed File System (HDFS) – a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster (a short, hedged sketch of the HDFS client API appears right after this list);
  • Hadoop YARN – introduced in 2012, a platform responsible for managing computing resources in clusters and using them to schedule users’ applications;
  • Hadoop MapReduce – an implementation of the MapReduce programming model for large-scale data processing.
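
As a taste of how these modules surface to application code, here is a small, hedged sketch of the HDFS client API mentioned in the list above. The NameNode URI and the paths are assumptions; substitute the fs.defaultFS value and directories of your own cluster.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsTour {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address; usually read from core-site.xml.
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);

        // Upload a local file; HDFS transparently splits it into blocks
        // and replicates them across DataNodes.
        fs.copyFromLocalFile(new Path("/tmp/sample.txt"),
                             new Path("/user/demo/sample.txt"));

        // List what landed in the directory.
        for (FileStatus status : fs.listStatus(new Path("/user/demo"))) {
          System.out.printf("%s\t%d bytes%n", status.getPath(), status.getLen());
        }
        fs.close();
      }
    }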

The term Hadoop has come to refer not just to the aforementioned base modules and sub-modules, but also to the ecosystem, or collection of additional software packages that can be installed on top of or alongside Hadoop, such as Apache Pig, Apache Hive, Apache HBase, Apache Phoenix, Apache Spark, Apache ZooKeeper, Cloudera Impala, Apache Flume, Apache Sqoop, Apache Oozie, and Apache Storm.

Apache Hadoop’s MapReduce and HDFS components were inspired by Google papers on their MapReduce and Google File System.

The Hadoop framework itself is mostly written in the Java programming language, with some native code in C and command line utilities written as shell scripts. Though MapReduce Java code is common, any programming language can be used with “Hadoop Streaming” to implement the “map” and “reduce” parts of the user’s program. Other projects in the Hadoop ecosystem expose richer user interfaces.
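
The streaming contract is simple: a mapper is any executable that reads input lines from stdin and writes tab-separated key/value lines to stdout. Streaming programs are usually written in scripting languages such as Python (see the Hadoop Streaming Example below); the hedged sketch here uses Java only to keep all examples on this page in one language.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // A word-count mapper under the Hadoop Streaming protocol: read lines
    // from stdin, emit "word<TAB>1" lines on stdout. A companion reducer
    // would read the sorted pairs back from stdin and sum them.
    //
    // Submitted with something like (the jar location varies by install):
    //   hadoop jar .../hadoop-streaming-*.jar \
    //     -input /in -output /out \
    //     -mapper <mapper command> -reducer <reducer command>
    public class StreamMapper {
      public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
          for (String word : line.trim().split("\\s+")) {
            if (!word.isEmpty()) {
              System.out.println(word + "\t1");
            }
          }
        }
      }
    }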

Note
If you wish to build up your Apache Hadoop knowledge first, check out our Apache Hadoop Tutorial – The ULTIMATE Guide.

Apache Hadoop Tutorials – Getting Started

Simple examples based on Apache Hadoop

  • Prerequisites for Learning Hadoop
    In this article, we dig deep into the prerequisites for learning and working with Hadoop. We will see what is strictly required and what the industry suggests you should know before you start learning Hadoop.
  • Is Hadoop a database?
    In this article, we address one of the questions most frequently asked by beginners in the Apache Hadoop and Big Data ecosystem.
  • The Hadoop Ecosystem Explained
    In this article, we will go through the Hadoop Ecosystem, see what it consists of, and learn what the different projects can do.
  • How Does Hadoop Work
    Understanding how Hadoop works under the hood is important if you want to be comfortable with the whole Hadoop ecosystem.
  • Hadoop Hello World Example
    In this post, we feature a comprehensive Hadoop Hello World Example. Hadoop is an Apache Software Foundation project. It is the open source version inspired by Google MapReduce and Google File System.
  • Hadoop High Availability Tutorial
    In this tutorial, we will have a look at the High Availability feature of an Apache Hadoop cluster. High Availability is one of the most important features, needed especially once the cluster is in production. We do not want a single failure to make the whole cluster unavailable, and this is where Hadoop’s High Availability comes into play.
  • Apache Hadoop Administration Tutorial
    In this tutorial, we will look into the administration responsibilities and how to administer the Hadoop Cluster.

Apache Hadoop Tutorials – Functions

Learn the most popular functionalities and operations of Apache Hadoop

  • Apache Hadoop Zookeeper Example
    In this example, we will explore Apache Zookeeper, starting with an introduction and then following with the steps to set up Zookeeper and get it up and running.
  • Hadoop Mapreduce Combiner Example
    In this example, we will learn about Hadoop Combiners. Combiners are highly useful functions offered by Hadoop, especially when we are processing large amounts of data. We will understand combiners using a simple example.
  • Hadoop Mapper Example
    In this example, we will discuss and understand Hadoop Mappers, the first half of the Hadoop MapReduce framework. Mappers are the most visible part of any MapReduce application, and a good understanding of Mappers is required to take full advantage of MapReduce’s capabilities.
  • Hadoop CopyFromLocal Example
    In this example, we will understand the copyFromLocal command of the Hadoop FS shell and the various ways it can be used in applications and in maintenance of clusters.
  • Hadoop Streaming Example
    In this example, we will dive into the streaming component of Hadoop MapReduce. We will understand the basics of Hadoop Streaming and see an example using Python.
  • Hadoop Oozie Example
    In this example, we will learn about Oozie, a Hadoop ecosystem framework that helps automate the scheduling of work on Hadoop clusters.
  • Apache Hadoop RecordReader Example
    In this example, we will have a look at the RecordReader component of Apache Hadoop and understand the concept behind it. But before digging into the example code, we would like to look at the theory behind InputStream and RecordReader to better understand the concept.
  • Apache Hadoop FS Commands Example
    In this example, we will go through the most important commands you may need to know to handle the Hadoop File System (FS).
  • Apache Hadoop Distcp Example
    In this example, we are going to show you how to copy large files in an inter/intra-cluster Hadoop setup using the distributed copy tool (distcp).
  • Apache Hadoop Distributed Cache Example
    In this example article, we will go through Apache Hadoop Distributed Cache and will understand how to use it with MapReduce Jobs.
  • Apache Hadoop Distributed File System Explained
    In this example, we will discuss the Apache Hadoop Distributed File System (HDFS), its components, and its architecture in detail. HDFS is also one of the core components of the Apache Hadoop ecosystem.
  • Hadoop Sequence File Example
    In this article, we will have a look at the Hadoop Sequence File format. Sequence Files are a Hadoop-specific file format that stores data as serialized key-value pairs. We will look into the details of Sequence Files in the subsequent sections; a short, hedged write/read sketch in Java also appears after this list.
  • Hadoop Getmerge Example
    In this example, we will look at merging different files into one file in HDFS (Hadoop Distributed File System), specifically using the getmerge command.
  • Hadoop Hbase Maven Example
    In this article, we will learn about using Maven to include HBase in your Apache Hadoop applications, and how Maven’s repositories make it easy to write Java HBase applications.
  • Apache Hadoop Cluster Setup Example (with Virtual Machines)
    Virtual machines come to the rescue here: using multiple virtual machines, we can set up a Hadoop cluster on a single system. So, in this example, we will discuss how to set up an Apache Hadoop cluster using virtual machines.
  • Apache Hadoop Wordcount Example
    In this example, we will demonstrate the Word Count example in Hadoop. Word count is the basic example for understanding the Hadoop MapReduce paradigm: we count the number of instances of each word in an input file and output the list of words along with the number of instances of each.
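
As promised in the Hadoop Sequence File item above, here is a small, hedged sketch of writing and reading a Sequence File with the org.apache.hadoop.io.SequenceFile API. It assumes the option-based Writer/Reader constructors of Hadoop 2.x, and the file path is a placeholder.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SequenceFileDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path("/tmp/demo.seq"); // hypothetical location

        // Write a few serialized key-value pairs.
        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
            SequenceFile.Writer.file(path),
            SequenceFile.Writer.keyClass(Text.class),
            SequenceFile.Writer.valueClass(IntWritable.class))) {
          writer.append(new Text("alpha"), new IntWritable(1));
          writer.append(new Text("beta"), new IntWritable(2));
        }

        // Read them back in insertion order.
        try (SequenceFile.Reader reader = new SequenceFile.Reader(conf,
            SequenceFile.Reader.file(path))) {
          Text key = new Text();
          IntWritable value = new IntWritable();
          while (reader.next(key, value)) {
            System.out.println(key + " -> " + value);
          }
        }
      }
    }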

Apache Hadoop Tutorials – Integrations

Learn how to use Apache Hadoop with other technologies

  • Apache Hadoop as a Service Options
    In this article, we will have a look at the available options for making use of Hadoop as a Service, aka HDaaS. Implementing a Hadoop cluster on one’s own in-house infrastructure is a complex task in itself and needs a dedicated, expert team.
  • How to Install Apache Hadoop on Ubuntu
    In this example, we will see the details of how to install Apache Hadoop on an Ubuntu system. The example will describe all the required steps for installing a single-node Apache Hadoop cluster on Ubuntu 15.10. Hadoop is a framework for distributed processing of applications on large clusters of commodity hardware.
  • Apache Hadoop Knox Tutorial
    In this tutorial, we will learn about Apache Knox. Knox provides the REST API Gateway for the Apache Hadoop Ecosystem. We will go through the basics of Apache Knox in the following sections.
  • Apache Hadoop Nutch Tutorial
    In this tutorial, we will go through and introduce another component of the Apache Hadoop ecosystem that is Apache Nutch. Apache Nutch is a Web crawler which takes advantage of the distributed Hadoop ecosystem for crawling data.
  • Apache Hadoop Hue Tutorial
    In this tutorial, we will learn about Hue. This will be the basic tutorial to start understanding what Hue is and how it can be used in the Hadoop and Big Data Ecosystem.
  • Apache Hadoop Hive Tutorial
    In this example, we will understand what Apache Hive is, where it is used, basics of Apache Hive, its data types and basic operations.
  • Hadoop Kerberos Authentication Tutorial
    In this tutorial, we will see how to secure a Hadoop cluster by implementing authentication. Kerberos is the standard authentication protocol used to implement security in Hadoop clusters.
  • Spring for Apache Hadoop 2.0 M5
    The Spring team has announced the Spring for Apache Hadoop 2.0 M5 milestone release and is getting much closer to a release candidate. The Spring blog offers a good comparison between the new 2.0 version and the 1.0 version.
  • Apache Hadoop 2.4.0
    The Apache community has voted to release Apache Hadoop 2.4.0, so the new release is now available and includes important improvements, related not only to HDFS but also to MapReduce.
  • How Hadoop Works? HDFS case study
    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
  • Spring meets Apache Hadoop
    Spring for Apache Hadoop was born to resolve the issue of having poorly constructed Hadoop applications, which usually consist of command line utilities, scripts and pieces of code stitched together. It provides a consistent programming and configuration model across a wide range of Hadoop ecosystem projects, as expected from a Spring project.
  • Big Data Hadoop Tutorial for Beginners
    This tutorial is for beginners who want to start learning about Big Data and the Apache Hadoop ecosystem. It introduces different concepts of Big Data and Apache Hadoop and will set the foundation for further learning.
  • Difference Between Bigdata and Hadoop
    In this article, we will address the very basic question that beginners in the field of Big Data have: what is the difference between Big Data and Apache Hadoop?
  • A SMALL cross-section of BIG Data
    Big data is a term applied to data sets whose size is beyond the ability of commonly used software tools to capture, manage, and process the data within a tolerable elapsed time. Big data sizes are a constantly moving target currently ranging from a few dozen terabytes to many petabytes of data in a single data set.
  • Big Data analytics with Hive and iReport
    In this article we will set up a Hive Server, create a table, load it with data from a text file, and then create a Jasper Report using iReport. The Jasper Report executes an SQL query on the Hive Server that is then translated to a MapReduce job executed by Hadoop. A minimal, hedged Hive JDBC sketch in Java follows this list.
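
As referenced in the Hive entries above, here is a minimal, hedged sketch of querying HiveServer2 over JDBC, the same mechanism a reporting tool like iReport uses to run SQL that Hive translates into MapReduce jobs. The URL, credentials, and the word_counts table are assumptions, and the hive-jdbc driver must be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQuery {
      public static void main(String[] args) throws Exception {
        // Explicit driver load; newer JDBC setups discover it automatically.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // Hypothetical HiveServer2 endpoint and credentials.
        String url = "jdbc:hive2://localhost:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT word, total FROM word_counts LIMIT 10")) {
          while (rs.next()) {
            System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
          }
        }
      }
    }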
