{"id":47999,"date":"2018-01-27T10:53:02","date_gmt":"2018-01-27T05:23:02","guid":{"rendered":"http:\/\/blog.odango.com\/?p=47999"},"modified":"2023-02-03T17:59:16","modified_gmt":"2023-02-03T12:29:16","slug":"hadoop-environment-setup","status":"publish","type":"post","link":"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/","title":{"rendered":"Hadoop Environment Setup"},"content":{"rendered":"\n[et_pb_section bb_built=&#8221;1&#8243; next_background_color=&#8221;#000000&#8243;][et_pb_row][et_pb_column type=&#8221;4_4&#8243;][et_pb_post_title _builder_version=&#8221;3.0.106&#8243; title=&#8221;on&#8221; meta=&#8221;off&#8221; author=&#8221;on&#8221; date=&#8221;on&#8221; categories=&#8221;on&#8221; comments=&#8221;on&#8221; featured_image=&#8221;off&#8221; featured_placement=&#8221;below&#8221; text_color=&#8221;dark&#8221; text_background=&#8221;off&#8221; title_font=&#8221;Titillium Web||||||||&#8221; title_font_size=&#8221;47&#8243; title_font_size_tablet=&#8221;40&#8243; title_font_size_phone=&#8221;35&#8243; title_font_size_last_edited=&#8221;on|desktop&#8221; title_text_color=&#8221;rgba(0,0,0,0.8)&#8221; title_text_align=&#8221;left&#8221; text_shadow_horizontal_length=&#8221;0.08em&#8221; text_shadow_vertical_length=&#8221;0.08em&#8221; text_shadow_blur_strength=&#8221;0.08em&#8221; title_text_shadow_horizontal_length=&#8221;0.08em&#8221; title_text_shadow_vertical_length=&#8221;0.08em&#8221; custom_margin=&#8221;|||10%&#8221; \/][\/et_pb_column][\/et_pb_row][et_pb_row][et_pb_column type=&#8221;4_4&#8243;][et_pb_text _builder_version=&#8221;3.12.2&#8243; text_font=&#8221;Titillium Web|300|||||||&#8221; text_font_size=&#8221;20&#8243; text_font_size_last_edited=&#8221;on|desktop&#8221; text_text_color=&#8221;rgba(0,0,0,0.8)&#8221; text_line_height=&#8221;1.6em&#8221; max_width=&#8221;800px&#8221; custom_margin=&#8221;|||10%&#8221; text_line_height_last_edited=&#8221;off|phone&#8221; max_width_last_edited=&#8221;off|phone&#8221; 
custom_margin_last_edited=&#8221;off|desktop&#8221;]\r\n\r\n<strong>In this blog, we start by creating a user:<\/strong>\r\n\r\nIt is recommended to create a separate user for Hadoop to isolate the <a href=\"https:\/\/asha24.net\/blog\/introduction-to-hadoop-distributed-file-system-hdfs\/\">Hadoop file system<\/a> from the UNIX file system.\r\n\r\nFollow the steps given below to create a user:\r\n\r\nSwitch to the root user using the command \u201csu\u201d.\r\n\r\nCreate a user from the root account using the command \u201cuseradd username\u201d.\r\n\r\nNow you can open the new user account using the command \u201csu username\u201d.\r\n\r\nOpen the Linux terminal and type the following commands to create a user.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>$ su\r\n\r\npassword:\r\n\r\n# useradd asha24\r\n\r\n# passwd asha24\r\n\r\nNew passwd:\r\n\r\nRetype new passwd<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<strong><a href=\"https:\/\/en.wikipedia.org\/wiki\/Secure_Shell\">SSH Setup<\/a> and Key Generation<\/strong>\r\n\r\nSSH setup is required to perform different operations on a cluster, such as starting and stopping daemons and distributed daemon shell operations. To authenticate the different users of Hadoop, you need to generate a public\/private key pair for the Hadoop user and share the public key with those users.\r\n\r\nThe following commands are used for generating a key pair using SSH. 
They also copy the public key from id_rsa.pub to authorized_keys and give the owner read and write permissions on the authorized_keys file.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>$ ssh-keygen -t rsa\r\n\r\n$ cat ~\/.ssh\/id_rsa.pub &gt;&gt; ~\/.ssh\/authorized_keys\r\n\r\n$ chmod 0600 ~\/.ssh\/authorized_keys<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<strong>Downloading Hadoop<\/strong>\r\n\r\nDownload and extract Hadoop 2.6.5 from the Apache Software Foundation using the following commands.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>$ su\r\n\r\npassword:\r\n\r\n# cd \/usr\/local\r\n\r\n# wget http:\/\/apache.claz.org\/hadoop\/common\/hadoop-2.6.5\/hadoop-2.6.5.tar.gz\r\n\r\n# tar xzf hadoop-2.6.5.tar.gz\r\n\r\n# mkdir hadoop\r\n\r\n# mv hadoop-2.6.5\/* hadoop\/\r\n\r\n# exit<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<strong>Installing Hadoop in Standalone Mode<\/strong>\r\n\r\nHere we will discuss the <a href=\"http:\/\/hadoop.apache.org\/releases.html\">installation<\/a> of Hadoop 2.6.5 in standalone mode.\r\n\r\nThere are no daemons running and everything runs in a single JVM. Standalone mode is suitable for running MapReduce programs during development, since it is easy to test and debug them.\r\n\r\n<strong>Setting up Hadoop<\/strong>\r\n\r\nYou can set the Hadoop environment variables by appending the following lines to the ~\/.bashrc file.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>export HADOOP_HOME=\/usr\/local\/hadoop\r\n\r\nexport PATH=$PATH:$HADOOP_HOME\/bin<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<strong>Before proceeding further, you need to make sure that Hadoop is working fine. 
Just issue the following command:<\/strong>\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>$ hadoop version<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nIf everything is fine with your setup, then you should see a result like the following:\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>Hadoop 2.6.5\r\n\r\nSubversion https:\/\/svn.apache.org\/repos\/asf\/hadoop\/common -r 1529768\r\n\r\nCompiled by hortonmu on 2017-02-13T14:06Z\r\n\r\nCompiled with protoc 2.5.0\r\n\r\nFrom source with checksum 79e53ce7994d1628b240f09af91e1af4<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nThis means your Hadoop standalone mode setup is working fine. By default, Hadoop is configured to run in a non-distributed mode on a single machine.\r\n\r\n<strong>Step 1:\u00a0<\/strong>Create temporary content files in an input directory. You can create this input directory anywhere you would like to work.\r\n\r\n<strong>It will give the following files in your input directory:<\/strong>\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>total 24\r\n\r\n-rw-r--r-- 1 root root 14133 Feb 13 14:28 LICENSE.txt\r\n\r\n-rw-r--r-- 1 root root 178 Feb 13 14:28 NOTICE.txt\r\n\r\n-rw-r--r-- 1 root root 1945 Feb 13 14:28 README.txt<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nThese files have been copied from the Hadoop installation home directory. 
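A listing like the one above can be produced with commands along these lines (a sketch: the input directory name and the HADOOP_HOME=/usr/local/hadoop location are assumptions you should adapt to your setup):

```shell
# Sketch: create a working input directory and seed it with the text
# files that ship with the Hadoop install (HADOOP_HOME is assumed to be
# /usr/local/hadoop, as set up earlier; adjust to your layout).
mkdir -p input
cp "${HADOOP_HOME:-/usr/local/hadoop}"/*.txt input/ 2>/dev/null || true
ls -l input
```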
For your experiments, you can use different and larger sets of files.\r\n\r\n<strong>Step 2:\u00a0<\/strong>Let&#8217;s start the Hadoop process to count the total number of words in all the files available in the input directory, as follows:\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>$ hadoop jar $HADOOP_HOME\/share\/hadoop\/mapreduce\/hadoop-mapreduce-examples-2.6.5.jar wordcount input output<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<strong>Step 3:\u00a0<\/strong>This does the required processing and saves the output in the output\/part-r-00000 file, which you can check by using:\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>$ cat output\/*<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nIt lists all the words along with their total counts across all the files in the input directory.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>&#8220;AS 4\r\n\r\n&#8220;Contribution&#8221; 1\r\n\r\n&#8220;Contributor&#8221; 1\r\n\r\n&#8220;Derivative 1\r\n\r\n&#8220;Legal 1\r\n\r\n&#8220;License&#8221; 1\r\n\r\n&#8220;License&#8221;); 1\r\n\r\n&#8220;Licensor&#8221; 1\r\n\r\n&#8220;NOTICE&#8221; 1\r\n\r\n&#8220;Not 1\r\n\r\n&#8220;Object&#8221; 1\r\n\r\n&#8220;Source&#8221; 1\r\n\r\n&#8220;Work&#8221; 1\r\n\r\n&#8220;You&#8221; 1\r\n\r\n&#8220;Your&#8221;) 1\r\n\r\n&#8220;[]&#8221; 1\r\n\r\n&#8220;control&#8221; 1\r\n\r\n&#8220;printed 1\r\n\r\n&#8220;submitted&#8221; 1\r\n\r\n(50%) 
1\r\n\r\n(BIS), 1\r\n\r\n(C) 1\r\n\r\n(Don&#8217;t) 1\r\n\r\n(ECCN) 1\r\n\r\n(INCLUDING 2\r\n\r\n(INCLUDING, 2\r\n\r\n&#8230;<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<strong>Installing Hadoop in Pseudo-Distributed Mode<\/strong>\r\n\r\nFollow the steps given below to install Hadoop 2.6.5 in pseudo-distributed mode.\r\n\r\n<strong>Step 1:<\/strong>\u00a0Setting Up Hadoop\r\n\r\nYou can set the Hadoop environment variables by appending the following lines to the ~\/.bashrc file.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>export HADOOP_HOME=\/usr\/local\/hadoop\r\n\r\nexport HADOOP_MAPRED_HOME=$HADOOP_HOME\r\n\r\nexport HADOOP_COMMON_HOME=$HADOOP_HOME\r\n\r\nexport HADOOP_HDFS_HOME=$HADOOP_HOME\r\n\r\nexport YARN_HOME=$HADOOP_HOME\r\n\r\nexport HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME\/lib\/native\r\n\r\nexport PATH=$PATH:$HADOOP_HOME\/sbin:$HADOOP_HOME\/bin\r\n\r\nexport HADOOP_INSTALL=$HADOOP_HOME<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nNow apply all the changes to the currently running shell.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>$ source ~\/.bashrc<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<strong>Step 2:<\/strong> Hadoop Configuration\r\n\r\nYou can find all the Hadoop <a href=\"http:\/\/hadoop.apache.org\/docs\/r2.6.4\/api\/org\/apache\/hadoop\/conf\/Configuration.html\">configuration<\/a> files in the location \u201c$HADOOP_HOME\/etc\/hadoop\u201d. 
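As a quick orientation (a sketch; the /usr/local/hadoop install path from the earlier steps is an assumption), the files edited in the rest of this section all live together in that one directory:

```shell
# Sketch: build the paths of the configuration files edited below.
# CONF_DIR assumes the install location used earlier; adjust as needed.
CONF_DIR="${HADOOP_HOME:-/usr/local/hadoop}/etc/hadoop"
for f in hadoop-env.sh core-site.xml hdfs-site.xml yarn-site.xml mapred-site.xml; do
  echo "$CONF_DIR/$f"
done
```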
It is required to make changes in those configuration files according to your Hadoop infrastructure.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>$ cd $HADOOP_HOME\/etc\/hadoop<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nIn order to develop Hadoop programs in Java, you have to reset the Java environment variable in the hadoop-env.sh file by replacing the JAVA_HOME value with the location of Java on your system, for example:\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>export JAVA_HOME=\/usr\/local\/jdk1.8.0_151<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nThe following is the list of files that you have to edit to configure Hadoop.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>core-site.xml<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nThe core-site.xml file contains information such as the port number used for the Hadoop instance, the memory allocated for the file system, the memory limit for storing the data, and the size of the Read\/Write buffers.\r\n\r\nOpen core-site.xml and add the following properties in between the &lt;configuration&gt;, &lt;\/configuration&gt; tags.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>&lt;configuration&gt;\r\n\r\n&lt;property&gt;\r\n\r\n&lt;name&gt;fs.default.name&lt;\/name&gt;\r\n\r\n&lt;value&gt;hdfs:\/\/localhost:9000&lt;\/value&gt;\r\n\r\n&lt;\/property&gt;\r\n\r\n&lt;\/configuration&gt;<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>hdfs-site.xml<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nThe <a href=\"https:\/\/asha24.net\/blog\/hdfs-commands-and-operations\/\">hdfs<\/a>-site.xml file contains information such as the value of the replication data, the namenode path, and the datanode paths of your local file systems. 
In other words, these are the places where you want to store the Hadoop infrastructure.\r\n\r\nLet us assume the following data.\r\n<table style=\"height: 313px;\" border=\"1\" width=\"457\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>dfs.replication (data replication value) = 1\r\n\r\n(In the paths given below, hadoop is the user name;\r\n\r\nhadoopinfra\/hdfs\/namenode and hadoopinfra\/hdfs\/datanode are the directories created for the hdfs file system.)\r\n\r\nnamenode path = \/home\/hadoop\/hadoopinfra\/hdfs\/namenode\r\n\r\ndatanode path = \/home\/hadoop\/hadoopinfra\/hdfs\/datanode<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nOpen this file and add the following properties in between the &lt;configuration&gt;, &lt;\/configuration&gt; tags in this file.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>&lt;configuration&gt;\r\n\r\n&lt;property&gt;\r\n\r\n&lt;name&gt;dfs.replication&lt;\/name&gt;\r\n\r\n&lt;value&gt;1&lt;\/value&gt;\r\n\r\n&lt;\/property&gt;\r\n\r\n&lt;property&gt;\r\n\r\n&lt;name&gt;dfs.name.dir&lt;\/name&gt;\r\n\r\n&lt;value&gt;file:\/\/\/home\/hadoop\/hadoopinfra\/hdfs\/namenode&lt;\/value&gt;\r\n\r\n&lt;\/property&gt;\r\n\r\n&lt;property&gt;\r\n\r\n&lt;name&gt;dfs.data.dir&lt;\/name&gt;\r\n\r\n&lt;value&gt;file:\/\/\/home\/hadoop\/hadoopinfra\/hdfs\/datanode&lt;\/value&gt;\r\n\r\n&lt;\/property&gt;\r\n\r\n&lt;\/configuration&gt;<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nNote: In the above file, all the property values are user-defined and you can make changes according to your Hadoop infrastructure.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>yarn-site.xml<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nThis file is used to configure YARN in Hadoop. Open the yarn-site.xml file and add the following properties in between the &lt;configuration&gt;, &lt;\/configuration&gt; tags in this file.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>&lt;configuration&gt;\r\n\r\n&lt;property&gt;\r\n\r\n&lt;name&gt;yarn.nodemanager.aux-services&lt;\/name&gt;\r\n\r\n&lt;value&gt;mapreduce_shuffle&lt;\/value&gt;\r\n\r\n&lt;\/property&gt;\r\n\r\n&lt;\/configuration&gt;<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>mapred-site.xml<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nThis file is used to specify which <a href=\"https:\/\/asha24.net\/blog\/introduction-to-mapreduce-in-big-data\/\">MapReduce<\/a> framework we are using. By default, Hadoop contains a template of mapred-site.xml. 
First of all, copy the file mapred-site.xml.template to mapred-site.xml using the below command.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>$ cp mapred-site.xml.template mapred-site.xml<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nOpen the mapred-site.xml file and add the following properties in between the &lt;configuration&gt;, &lt;\/configuration&gt; tags in this file.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>&lt;configuration&gt;\r\n\r\n&lt;property&gt;\r\n\r\n&lt;name&gt;mapreduce.framework.name&lt;\/name&gt;\r\n\r\n&lt;value&gt;yarn&lt;\/value&gt;\r\n\r\n&lt;\/property&gt;\r\n\r\n&lt;\/configuration&gt;<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<strong>Verifying Hadoop Installation<\/strong>\r\n\r\nThe following steps are used to verify the Hadoop installation.\r\n\r\n<strong>Step 1:<\/strong> Name Node Setup\r\n\r\nFormat the namenode using the command \u201chdfs namenode -format\u201d as follows.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>$ cd ~\r\n\r\n$ hdfs namenode -format<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nThe expected result is as follows.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>02\/13\/17 21:30:55 INFO namenode.NameNode: STARTUP_MSG:\r\n\r\n\/************************************************************\r\n\r\nSTARTUP_MSG: Starting NameNode\r\n\r\nSTARTUP_MSG:\u00a0\u00a0 host = localhost\/192.168.1.11\r\n\r\nSTARTUP_MSG:\u00a0\u00a0 args = [-format]\r\n\r\nSTARTUP_MSG:\u00a0\u00a0 version = 2.6.5\r\n\r\n&#8230;\r\n\r\n&#8230;\r\n\r\n02\/13\/17 15:14:48 INFO common.Storage: Storage directory\r\n\r\n\/home\/hadoop\/hadoopinfra\/hdfs\/namenode has been successfully formatted.\r\n\r\n02\/13\/17 15:14:48 INFO namenode.NNStorageRetentionManager: Going to\r\n\r\nretain 1 images with txid &gt;= 0\r\n\r\n02\/13\/17 15:14:48 INFO util.ExitUtil: Exiting with status 0\r\n\r\n02\/13\/17 15:14:48 INFO namenode.NameNode: SHUTDOWN_MSG:\r\n\r\n\/************************************************************\r\n\r\nSHUTDOWN_MSG: Shutting down NameNode at 
localhost\/192.168.1.11\r\n\r\n************************************************************\/<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<strong>Step 2:<\/strong> Verifying Hadoop dfs\r\n\r\nThe following command is used to start dfs. Executing this command will start your Hadoop file system.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>$ start-dfs.sh<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\nThe expected output is as follows:\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>02\/13\/17 15:14:48\r\n\r\nStarting namenodes on [localhost]\r\n\r\nlocalhost: starting namenode, logging to \/home\/hadoop\/hadoop-2.6.5\/logs\/hadoop-hadoop-namenode-localhost.out\r\n\r\nlocalhost: starting datanode, logging to \/home\/hadoop\/hadoop-2.6.5\/logs\/hadoop-hadoop-datanode-localhost.out\r\n\r\nStarting secondary namenodes [0.0.0.0]<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<strong>Step 3:\u00a0<\/strong>Verifying the YARN Script\r\n\r\nThe following command is used to start the YARN script. Executing this command will start your YARN daemons.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>$ start-yarn.sh<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<strong>The expected output is as follows:<\/strong>\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>starting yarn daemons\r\n\r\nstarting resourcemanager, logging to \/home\/hadoop\/hadoop-2.6.5\/logs\/yarn-hadoop-resourcemanager-localhost.out\r\n\r\nlocalhost: starting nodemanager, logging to \/home\/hadoop\/hadoop-2.6.5\/logs\/yarn-hadoop-nodemanager-localhost.out<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<strong>Step 4:<\/strong> Accessing Hadoop on a Browser\r\n\r\nThe default port number to access Hadoop is 50070. 
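The address can also be probed from the shell; a minimal sketch, assuming the Hadoop 2.x default port just mentioned (curl will simply report a connection failure if the daemons are not running):

```shell
# Sketch: probe the NameNode web UI (Hadoop 2.x default port 50070).
# The URL itself is the thing to note; curl errors are tolerated.
NAMENODE_UI="http://localhost:50070/"
echo "$NAMENODE_UI"
curl -s -o /dev/null -w '%{http_code}\n' "$NAMENODE_UI" || true
```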
Use the following URL to get Hadoop services on a browser.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>http:\/\/localhost:50070\/<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<strong>Step 5:<\/strong> Verify All Applications for the <a href=\"https:\/\/asha24.net\/blog\/multinode-cluster-installation-guide\/\">Cluster<\/a>\r\n\r\nThe default port number to access all applications of the cluster is 8088. Use the following URL to visit this service.\r\n<table border=\"1\" cellspacing=\"0\">\r\n<tbody>\r\n<tr>\r\n<td>http:\/\/localhost:8088\/<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section][et_pb_section bb_built=&#8221;1&#8243; prev_background_color=&#8221;#000000&#8243;][et_pb_row][et_pb_column type=&#8221;4_4&#8243;][et_pb_team_member _builder_version=&#8221;3.0.106&#8243; name=&#8221;Nitesh&#8221; position=&#8221;Author&#8221; facebook_url=&#8221;&#8221; twitter_url=&#8221;&#8221; google_url=&#8221;&#8221; linkedin_url=&#8221;&#8221; background_layout=&#8221;light&#8221; body_font=&#8221;Titillium Web||||||||&#8221; body_font_size=&#8221;16&#8243; body_font_size_last_edited=&#8221;on|desktop&#8221; header_font=&#8221;Titillium Web|700|||||||&#8221; image_url=&#8221;https:\/\/asha24.net\/blog\/\/wp-content\/uploads\/2018\/03\/Nitesh.gif&#8221;]\r\n\r\nBonjour. 
A curious dreamer enchanted by various languages, I write towards making technology seem fun here at Asha24.\r\n\r\n[\/et_pb_team_member][\/et_pb_column][\/et_pb_row][\/et_pb_section]\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":4,"featured_media":48024,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"categories":[33],"tags":[],"class_list":["post-47999","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-big-data"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v17.0 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Hadoop Environment Setup (A step by step Guide)<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Hadoop Environment Setup (A step by step Guide)\" \/>\n<meta property=\"og:url\" content=\"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/\" \/>\n<meta property=\"og:site_name\" content=\"Asha24 Blog\" \/>\n<meta property=\"article:published_time\" content=\"2018-01-27T05:23:02+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-02-03T12:29:16+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/asha24.net\/blog\/wp-content\/uploads\/2018\/03\/hadoop-environment-setup-5.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"970\" \/>\n\t<meta property=\"og:image:height\" content=\"620\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Mahesh\" 
\/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/asha24.net\/blog\/#website\",\"url\":\"https:\/\/asha24.net\/blog\/\",\"name\":\"Asha24 Blog\",\"description\":\"Dedication Towards Learning\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/asha24.net\/blog\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/#primaryimage\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/asha24.net\/blog\/wp-content\/uploads\/2018\/03\/hadoop-environment-setup-5.jpg\",\"contentUrl\":\"https:\/\/asha24.net\/blog\/wp-content\/uploads\/2018\/03\/hadoop-environment-setup-5.jpg\",\"width\":970,\"height\":620},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/#webpage\",\"url\":\"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/\",\"name\":\"Hadoop Environment Setup (A step by step 
Guide)\",\"isPartOf\":{\"@id\":\"https:\/\/asha24.net\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/#primaryimage\"},\"datePublished\":\"2018-01-27T05:23:02+00:00\",\"dateModified\":\"2023-02-03T12:29:16+00:00\",\"author\":{\"@id\":\"https:\/\/asha24.net\/blog\/#\/schema\/person\/f6167b78bbaddfc399ae1154cd5b6759\"},\"breadcrumb\":{\"@id\":\"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/asha24.net\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Hadoop Environment Setup\"}]},{\"@type\":\"Person\",\"@id\":\"https:\/\/asha24.net\/blog\/#\/schema\/person\/f6167b78bbaddfc399ae1154cd5b6759\",\"name\":\"Mahesh\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/asha24.net\/blog\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/085c03e75ffb51af2509c1cfad9c7d78b30236d43a008db2e46f96e2f40c67fc?s=96&d=wavatar&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/085c03e75ffb51af2509c1cfad9c7d78b30236d43a008db2e46f96e2f40c67fc?s=96&d=wavatar&r=g\",\"caption\":\"Mahesh\"},\"url\":\"https:\/\/asha24.net\/blog\/author\/mahesh\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Hadoop Environment Setup (A step by step Guide)","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/","og_locale":"en_US","og_type":"article","og_title":"Hadoop Environment Setup (A step by step Guide)","og_url":"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/","og_site_name":"Asha24 Blog","article_published_time":"2018-01-27T05:23:02+00:00","article_modified_time":"2023-02-03T12:29:16+00:00","og_image":[{"width":970,"height":620,"url":"https:\/\/asha24.net\/blog\/wp-content\/uploads\/2018\/03\/hadoop-environment-setup-5.jpg","path":"\/home\/reviews981\/public_html\/asha24.net\/blog\/wp-content\/uploads\/2018\/03\/hadoop-environment-setup-5.jpg","size":"full","id":48024,"alt":"","pixels":601400,"type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_misc":{"Written by":"Mahesh","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebSite","@id":"https:\/\/asha24.net\/blog\/#website","url":"https:\/\/asha24.net\/blog\/","name":"Asha24 Blog","description":"Dedication Towards Learning","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/asha24.net\/blog\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"ImageObject","@id":"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/#primaryimage","inLanguage":"en-US","url":"https:\/\/asha24.net\/blog\/wp-content\/uploads\/2018\/03\/hadoop-environment-setup-5.jpg","contentUrl":"https:\/\/asha24.net\/blog\/wp-content\/uploads\/2018\/03\/hadoop-environment-setup-5.jpg","width":970,"height":620},{"@type":"WebPage","@id":"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/#webpage","url":"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/","name":"Hadoop Environment Setup (A step by step Guide)","isPartOf":{"@id":"https:\/\/asha24.net\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/#primaryimage"},"datePublished":"2018-01-27T05:23:02+00:00","dateModified":"2023-02-03T12:29:16+00:00","author":{"@id":"https:\/\/asha24.net\/blog\/#\/schema\/person\/f6167b78bbaddfc399ae1154cd5b6759"},"breadcrumb":{"@id":"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/asha24.net\/blog\/hadoop-environment-setup\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/asha24.net\/blog\/hadoop-environment-setup\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/asha24.net\/blog\/"},{"@type":"ListItem","position":2,"name":"Hadoop Environment 
Setup"}]},{"@type":"Person","@id":"https:\/\/asha24.net\/blog\/#\/schema\/person\/f6167b78bbaddfc399ae1154cd5b6759","name":"Mahesh","image":{"@type":"ImageObject","@id":"https:\/\/asha24.net\/blog\/#personlogo","inLanguage":"en-US","url":"https:\/\/secure.gravatar.com\/avatar\/085c03e75ffb51af2509c1cfad9c7d78b30236d43a008db2e46f96e2f40c67fc?s=96&d=wavatar&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/085c03e75ffb51af2509c1cfad9c7d78b30236d43a008db2e46f96e2f40c67fc?s=96&d=wavatar&r=g","caption":"Mahesh"},"url":"https:\/\/asha24.net\/blog\/author\/mahesh\/"}]}},"_links":{"self":[{"href":"https:\/\/asha24.net\/blog\/wp-json\/wp\/v2\/posts\/47999","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/asha24.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/asha24.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/asha24.net\/blog\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/asha24.net\/blog\/wp-json\/wp\/v2\/comments?post=47999"}],"version-history":[{"count":6,"href":"https:\/\/asha24.net\/blog\/wp-json\/wp\/v2\/posts\/47999\/revisions"}],"predecessor-version":[{"id":52161,"href":"https:\/\/asha24.net\/blog\/wp-json\/wp\/v2\/posts\/47999\/revisions\/52161"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/asha24.net\/blog\/wp-json\/wp\/v2\/media\/48024"}],"wp:attachment":[{"href":"https:\/\/asha24.net\/blog\/wp-json\/wp\/v2\/media?parent=47999"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/asha24.net\/blog\/wp-json\/wp\/v2\/categories?post=47999"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/asha24.net\/blog\/wp-json\/wp\/v2\/tags?post=47999"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}