How to change the address the 'hadoop jar' command is connecting to?


I have been trying to start a MapReduce job on a cluster with the following command:

bin/hadoop jar myjar.jar mainclass /user/hduser/input /user/hduser/output 

but I get the following error over and over again, until the connection is finally refused:

13/08/08 00:37:16 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

I checked with netstat to see if the service is listening on the correct port:

~> sudo netstat -plten | grep java
tcp        0      0 10.1.1.4:54310          0.0.0.0:*               LISTEN      10022      38365       11366/java
tcp        0      0 10.1.1.4:54311          0.0.0.0:*               LISTEN      10022      32164       11829/java

Now notice that the service is listening on 10.1.1.4:54310, the IP of the master, but the 'hadoop jar' command seems to connect to 127.0.0.1 (the localhost, which is the same machine) and therefore doesn't find the service. Is there any way to force 'hadoop jar' to connect to 10.1.1.4 instead of 127.0.0.1?
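One thing worth trying (assuming the jar's main class goes through ToolRunner/GenericOptionsParser, which is not guaranteed for a third-party jar) is overriding the filesystem URI as a generic option on the command line:

```
bin/hadoop jar myjar.jar mainclass -D fs.default.name=hdfs://10.1.1.4:54310 /user/hduser/input /user/hduser/output
```

If the main class does not parse generic options, the `-D` flag is silently treated as an ordinary argument and has no effect.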

My NameNode, DataNode, JobTracker, TaskTracker, ... are all running. I checked the DataNode and TaskTracker on the slaves and they seem to be working. I can open the WebUI on the master and it shows the cluster is online.

I expect the problem to be DNS related, since the 'hadoop jar' command seems to find the correct port, but uses the 127.0.0.1 address instead of 10.1.1.4.
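If it is DNS, the first thing to check is that `master` actually resolves to 10.1.1.4 on every node (e.g. with `getent hosts master`). A typical `/etc/hosts` entry for this setup might look like the sketch below; only the master IP appears in the question, so any other entries are site-specific:

```
127.0.0.1    localhost
10.1.1.4     master
```

A common pitfall is a line mapping the master's hostname to 127.0.0.1 (or 127.0.1.1 on some distributions), which makes daemons and clients on that machine bind to or connect to the loopback address.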

Update

Configuration in core-site.xml:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation. The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class. The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>

Configuration in mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>

Configuration in hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
  </property>
</configuration>

Although it seemed like a DNS issue, it was actually Hadoop trying to resolve a reference to localhost in the code. I was deploying someone else's jar and assumed it was correct. Upon further inspection I found the reference to localhost and changed it to master, solving the issue.
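For anyone hitting the same thing: since a jar is just a zip archive, unpacking it (`jar xf myjar.jar` or `unzip myjar.jar`) and grepping for "localhost" finds such a reference quickly. The snippet below is only a sketch of the replace step on a bundled config file; the file name and property format are invented for illustration:

```shell
# Hypothetical bundled config that hard-codes localhost
printf 'fs.default.name=hdfs://localhost:54310\n' > /tmp/bundled.properties

# Point it at the master instead (GNU sed in-place edit)
sed -i 's/localhost/master/' /tmp/bundled.properties

cat /tmp/bundled.properties   # fs.default.name=hdfs://master:54310
```

After editing, the file has to be repacked into the jar (e.g. `jar uf myjar.jar <file>`) for the change to take effect.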

