Hadoop Environment Setup

Creating a User
In this blog, we will walk through setting up a Hadoop environment, starting with creating a user. It is recommended to create a separate user for Hadoop to isolate the Hadoop file system from the UNIX file system. Follow the steps given below to create a user: open root using the command "su", create a user from the root account using the command "useradd username", and then you can open the new user account using the command "su username". Open the Linux terminal and type the following commands to create a user.
$ su
password:
# useradd asha24
# passwd asha24
New passwd:
Retype new passwd
SSH Setup and Key Generation
SSH setup is required to perform different operations on a cluster such as starting, stopping, and distributed daemon shell operations. To authenticate different users of Hadoop, it is required to provide a public/private key pair for the Hadoop user and share it with different users. The following commands generate a key pair using SSH, copy the public key from id_rsa.pub to authorized_keys, and give the owner read and write permissions on the authorized_keys file.
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
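As a quick sanity check (not part of the original steps), you can confirm that passwordless SSH now works by logging in to the local machine; it should not prompt for a password:

$ ssh localhost
$ exit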
Downloading Hadoop
Download and extract Hadoop 2.6.5 from the Apache Software Foundation using the following commands.
$ su
password:
# cd /usr/local
# wget http://apache.claz.org/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz
# tar xzf hadoop-2.6.5.tar.gz
# mkdir hadoop
# mv hadoop-2.6.5/* hadoop/
# exit
Installing Hadoop in Standalone Mode
Here we will discuss the installation of Hadoop 2.6.5 in standalone mode. There are no daemons running and everything runs in a single JVM. Standalone mode is suitable for running MapReduce programs during development, since it is easy to test and debug them.
Setting up Hadoop
You can set Hadoop environment variables by appending the following commands to the ~/.bashrc file.
export HADOOP_HOME=/usr/local/hadoop
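With only HADOOP_HOME set, the hadoop command used below may not be on your PATH. A minimal addition, assuming the same ~/.bashrc, is to also append the bin directory and reload the file:

export PATH=$PATH:$HADOOP_HOME/bin
$ source ~/.bashrc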
Before proceeding further, you need to make sure that Hadoop is working fine. Just issue the following command:
$ hadoop version
If everything is fine with your setup, then you should see the following result:
Hadoop 2.6.5
Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
Compiled by hortonmu on 2017-02-13T14:06Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4
It means your Hadoop standalone mode setup is working fine. By default, Hadoop is configured to run in a non-distributed mode on a single machine.
Step 1: Create temporary content files in the input directory. You can create this input directory anywhere you would like to work.
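For example, assuming you copy the plain-text files that ship in the Hadoop installation home directory (as this tutorial does), the input directory can be prepared like this:

$ mkdir input
$ cp $HADOOP_HOME/*.txt input
$ ls -l input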
It will give the following files in your input directory:

total 24
-rw-r--r-- 1 root root 14133 Feb 13 14:28 LICENSE.txt
-rw-r--r-- 1 root root   178 Feb 13 14:28 NOTICE.txt
-rw-r--r-- 1 root root  1945 Feb 13 14:28 README.txt
These files have been copied from the Hadoop installation home directory. For your experiment, you can use different and larger sets of files.
Step 2: Let's start the Hadoop process to count the total number of words in all the files available in the input directory, as follows:
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount input output
Step 3: This does the required processing and saves the output in the output/part-r-00000 file, which you can check by using:
$ cat output/*
It will list all the words along with their total counts from all the files available in the input directory.
"AS             4
"Contribution"  1
"Contributor"   1
"Derivative     1
"Legal          1
"License"       1
"License");     1
"Licensor"      1
"NOTICE"        1
"Not            1
"Object"        1
"Source"        1
"Work"          1
"You"           1
"Your")         1
"[]"            1
"control"       1
"printed        1
"submitted"     1
(50%)           1
(BIS),          1
(C)             1
(Don't)         1
(ECCN)          1
(INCLUDING      2
(INCLUDING,     2
...
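One small housekeeping note: if you want to run the job again, delete the output directory first, since Hadoop refuses to start a job whose output directory already exists:

$ rm -r output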
Installing Hadoop in Pseudo Distributed Mode
Follow the steps given below to install Hadoop 2.6.5 in pseudo distributed mode.
Step 1: Setting Up Hadoop
You can set Hadoop environment variables by appending the following commands to the ~/.bashrc file.
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
Now apply all the changes to the currently running system.
$ source ~/.bashrc
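To quickly confirm that the new variables are visible in the current shell, you can, for example, print one of them:

$ echo $HADOOP_HOME
/usr/local/hadoop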
Step 2: Hadoop Configuration
You can find all the Hadoop configuration files in the location "$HADOOP_HOME/etc/hadoop". You need to make changes in those configuration files according to your Hadoop infrastructure.
$ cd $HADOOP_HOME/etc/hadoop
In order to develop Hadoop programs in Java, you have to reset the Java environment variable in the hadoop-env.sh file by replacing the JAVA_HOME value with the location of Java on your system.
export JAVA_HOME=/usr/local/jdk1.8.0_151
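If you are not sure where Java is installed on your machine, one convenient check (not part of the original steps) is to resolve the real path of the java binary; JAVA_HOME should then point at the JDK directory that contains it:

$ readlink -f $(which java)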
The following is the list of files that you have to edit to configure Hadoop.
core-site.xml
The core-site.xml file contains information such as the port number used for the Hadoop instance, memory allocated for the file system, memory limit for storing the data, and the size of Read/Write buffers. Open core-site.xml and add the following properties in between the <configuration> and </configuration> tags.
<property>
   <name>fs.default.name</name>
   <value>hdfs://localhost:9000</value>
</property>

hdfs-site.xml
The hdfs-site.xml file contains information such as the data replication value, the namenode path, and the datanode paths of your local file systems, that is, the place where you want to store the Hadoop infrastructure. Let us assume the following data.
dfs.replication (data replication value) = 1

(In the path given below, hadoop is the user name. hadoopinfra/hdfs/namenode is the directory created by the hdfs file system.)
namenode path = /home/hadoop/hadoopinfra/hdfs/namenode

(hadoopinfra/hdfs/datanode is the directory created by the hdfs file system.)
datanode path = /home/hadoop/hadoopinfra/hdfs/datanode
Open this file and add the following properties in between the <configuration> and </configuration> tags in this file.
<property>
   <name>dfs.replication</name>
   <value>1</value>
</property>
<property>
   <name>dfs.name.dir</name>
   <value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
</property>
<property>
   <name>dfs.data.dir</name>
   <value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
</property>
Note: In the above file, all the property values are user-defined and you can make changes according to your Hadoop infrastructure.
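Since these paths are user-defined, you may also want to create them up front so the NameNode and DataNode can use them (an optional step, assuming the same paths as above):

$ mkdir -p /home/hadoop/hadoopinfra/hdfs/namenode
$ mkdir -p /home/hadoop/hadoopinfra/hdfs/datanode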
yarn-site.xml
This file is used to configure YARN for Hadoop. Open the yarn-site.xml file and add the following properties in between the <configuration> and </configuration> tags in this file.
<property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
</property>

mapred-site.xml
This file is used to specify which MapReduce framework we are using. By default, Hadoop contains a template of this file named mapred-site.xml.template. First of all, copy the file from mapred-site.xml.template to mapred-site.xml using the command below.
$ cp mapred-site.xml.template mapred-site.xml
Open the mapred-site.xml file and add the following properties in between the <configuration> and </configuration> tags in this file.
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
</property>
Verifying Hadoop Installation
The following steps are used to verify the Hadoop installation.
Step 1: Name Node Setup
Set up the namenode using the command "hdfs namenode -format" as follows.
$ cd ~
$ hdfs namenode -format
The expected result is as follows.
02/13/17 21:30:55 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost/192.168.1.11
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.5
...
...
02/13/17 15:14:48 INFO common.Storage: Storage directory /home/hadoop/hadoopinfra/hdfs/namenode has been successfully formatted.
02/13/17 15:14:48 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
02/13/17 15:14:48 INFO util.ExitUtil: Exiting with status 0
02/13/17 15:14:48 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/192.168.1.11
************************************************************/
Step 2: Verifying Hadoop dfs
The following command is used to start dfs. Executing this command will start your Hadoop file system.
$ start-dfs.sh
The expected output is as below:
02/13/17 15:14:48 Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/hadoop-2.6.5/logs/hadoop-hadoop-namenode-localhost.out
localhost: starting datanode, logging to /home/hadoop/hadoop-2.6.5/logs/hadoop-hadoop-datanode-localhost.out
Starting secondary namenodes [0.0.0.0]
Step 3: Verifying Yarn Script
The following command is used to start the yarn script. Executing this command will start your yarn daemons.
$ start-yarn.sh
The expected output is as follows:
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.6.5/logs/yarn-hadoop-resourcemanager-localhost.out
localhost: starting nodemanager, logging to /home/hadoop/hadoop-2.6.5/logs/yarn-hadoop-nodemanager-localhost.out
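As an additional sanity check (not part of the original steps), the jps tool that ships with the JDK lists the running Java processes; after the two start scripts you should see entries such as NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager:

$ jps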
Step 4: Accessing Hadoop on Browser
The default port number to access Hadoop is 50070. Use the following URL to access Hadoop services in a browser.
http://localhost:50070/
Step 5: Verify All Applications for Cluster
The default port number to access all applications of the cluster is 8088. Use the following URL to visit this service.
http://localhost:8088/
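When you are finished, the daemons can be stopped with the matching stop scripts:

$ stop-yarn.sh
$ stop-dfs.sh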
Nitesh

Author

Bonjour. A curious dreamer enchanted by various languages, I write towards making technology seem fun here at Asha24.