Getting Started with Hadoop: Filtering Data Using MapReduce

Overview/Description

Extracting meaningful information from a very large dataset can be painstaking. In this Skillsoft Aspire course, learners examine how Hadoop's MapReduce can speed up this operation. In a new project, code the Mapper for an application that counts the number of passengers in each Titanic class in the input dataset, then develop a Reducer and Driver to generate the final passenger counts per class. Build the project with Maven and run it on the Hadoop master node to confirm that the output correctly shows the numbers in each passenger class. Next, apply MapReduce to filter only the surviving Titanic passengers from the input dataset; execute the application and verify that the filtering has worked correctly, examining the job and output files with the YARN cluster manager and HDFS (Hadoop Distributed File System) NameNode web user interfaces. Finally, using a restaurant app's dataset, apply MapReduce to obtain the distinct set of cuisines offered, then build and run the application and confirm the output with HDFS from both the command line and the web application. The concluding exercise involves filtering data using MapReduce.
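The counting workflow described above can be sketched in plain Python as a simulation of the Mapper/Reducer contract (this is not the course's Hadoop Java code; the sample records and field layout are illustrative):

```python
from collections import defaultdict

# Illustrative Titanic records: (passenger_id, pclass, survived)
records = [
    (1, 3, 0), (2, 1, 1), (3, 3, 1), (4, 1, 1), (5, 3, 0), (6, 2, 1),
]

def mapper(record):
    """Emit (pclass, 1) for each passenger, like a Hadoop Mapper."""
    _, pclass, _ = record
    yield (pclass, 1)

def shuffle(mapped_pairs):
    """Group values by key, as the framework does between the two phases."""
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    """Sum the counts for one key, like a Hadoop Reducer."""
    return (key, sum(values))

mapped = [pair for rec in records for pair in mapper(rec)]
counts = dict(reducer(k, vs) for k, vs in shuffle(mapped).items())
print(counts)  # → {3: 3, 1: 2, 2: 1}
```

The Driver's role in the real application is to wire these two functions into a Job and point them at the input and output paths; the shuffle step here stands in for the grouping that Hadoop performs automatically.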



Expected Duration (hours)
1.0

Lesson Objectives

  • Course Overview
  • create a new project and code up the Mapper for an application to count the number of passengers in each class of the Titanic in the input dataset
  • develop a Reducer and Driver for the application to generate the final passenger counts in each class of the Titanic
  • build the project using Maven and run it on the Hadoop master node to check that the output correctly shows the numbers in each passenger class
  • apply MapReduce to filter out only the surviving passengers on the Titanic from the input dataset
  • execute the application and verify that the filtering has worked correctly; examine the job and the output files using the YARN Cluster Manager and HDFS NameNode web UIs
  • use MapReduce to obtain a distinct set of the cuisines offered by the restaurants in a dataset
  • build and run the application and confirm the output using HDFS from both the command line and the web application
  • identify configuration functions used to customize a MapReduce job and recognize the types of input and output when null values are transmitted from the Mapper to the Reducer
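The distinct-cuisines objective above relies on the pattern named in the last bullet: the Mapper emits each cuisine as a key with a null value, and the Reducer ignores the values entirely, so each key is emitted exactly once. A minimal sketch in plain Python (again simulating the MapReduce contract, with illustrative data, not the course's Hadoop Java code):

```python
from collections import defaultdict

# Illustrative restaurant records: (restaurant_name, cuisine)
restaurants = [
    ("A", "Thai"), ("B", "Italian"), ("C", "Thai"),
    ("D", "Mexican"), ("E", "Italian"),
]

def mapper(record):
    """Emit (cuisine, None): the key carries all the information."""
    _, cuisine = record
    yield (cuisine, None)

def shuffle(pairs):
    """Group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    """Ignore the null values; emitting each key once yields the distinct set."""
    return key

mapped = [pair for rec in restaurants for pair in mapper(rec)]
distinct_cuisines = sorted(reducer(k, vs) for k, vs in shuffle(mapped).items())
print(distinct_cuisines)  # → ['Italian', 'Mexican', 'Thai']
```

Transmitting null rather than a real value keeps the intermediate data small, since the grouping on keys alone is what produces the distinct set.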
Course Number
it_dshpfddj_03_enus

Expertise Level
Beginner