How to install Alluxio 1.2 on OpenStack - installation

I am trying to install Alluxio 1.2 on a CentOS VM on OpenStack, alongside Spark and HDFS, but the installation doesn't work. Spark and HDFS are already installed and working. The master fails with:
ERROR logger.type (AlluxioMaster.java:main) - Uncaught exception while running Alluxio master, stopping it and exiting.
java.lang.RuntimeException: java.net.BindException: Address already in use
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at alluxio.web.UIWebServer.startWebServer(UIWebServer.java:164)
at alluxio.master.AlluxioMaster.startServingWebServer(AlluxioMaster.java:467)
at alluxio.master.AlluxioMaster.startServing(AlluxioMaster.java:452)
at alluxio.master.AlluxioMaster.startServing(AlluxioMaster.java:447)
at alluxio.master.AlluxioMaster.start(AlluxioMaster.java:389)
at alluxio.master.AlluxioMaster.main(AlluxioMaster.java:86)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at alluxio.web.UIWebServer.startWebServer(UIWebServer.java:154)
... 5 more
Is there a special procedure for installing Alluxio on a single OpenStack machine?

It looks like the Alluxio master's web UI cannot start because its address is already in use. This happens when the port is taken by another process. The Alluxio master web UI uses port 19999 by default. If you expect another process to be using that port, you can change the Alluxio master web UI port by setting the configuration parameter alluxio.master.web.port (see http://www.alluxio.org/docs/master/en/Configuration-Settings.html#master-configuration) to another port number.
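For example, a minimal check-and-override sketch (standard Linux netstat; the port value 20000 and the conf/alluxio-site.properties path are illustrative assumptions based on the linked configuration docs, so verify them for your release):
sudo netstat -tlnp | grep 19999                                      # find the process holding the default web UI port
echo "alluxio.master.web.port=20000" >> conf/alluxio-site.properties # move the web UI to a free port
./bin/alluxio-start.sh master                                        # restart the master (script name per your install)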

Related

Change server location for HDFS

I'm trying to follow the tutorial here: https://www.quickprogrammingtips.com/big-data/how-to-install-hadoop-on-mac-os-x-el-capitan.html, but I'm getting a strange error when trying to run the line
sbin/start-dfs.sh
It doesn't raise any complaints when I run the script, but the namenode is not actually started. When I went to inspect the logs, I saw this error:
2020-01-30 13:30:52,700 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: censoredsite.com:0
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:995)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:171)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:834)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:692)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:898)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:877)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1603)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1671)
Caused by: java.net.BindException: Can't assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:990)
Which was preceded by this line earlier:
2020-01-30 13:30:52,359 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://censoredsite.com/archive:50070
It seems that the web server address for HDFS has somehow been set to something it shouldn't be. I searched around online, but I couldn't find what this value should properly be (I assume localhost?) or how to actually change it in the config files.
The other interesting thing is that this "censoredsite" is actually a, uh... lewd site I used to visit a few years ago. I have absolutely no idea how it managed to get into my HDFS configuration details; it's pretty worrying that it somehow worked its way into my computer. Does anyone know how to explicitly change the address that org.apache.hadoop.hdfs.DFSUtil uses? Thanks.
It sounds like it ended up in your /etc/hosts file as a site mapping...
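A quick way to confirm (grepping for the placeholder name from the logs; substitute the real hostname):
grep -n censoredsite /etc/hosts
If a mapping line shows up, delete it; hosts-file blocklists and some ad blockers add exactly this kind of entry, which would explain how the site got in there.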
The way to change the address, though, is the dfs.namenode.http-address property in hdfs-site.xml:
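As a sketch, a single-machine entry might look like the following (localhost and the stock 50070 port are assumptions for a local-only setup; compare against the defaults linked below):
<property>
  <name>dfs.namenode.http-address</name>
  <value>localhost:50070</value>
</property>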
https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
Alternatively, install Hadoop in a VM, or download the Cloudera quickstart VMs, where it's all pre-configured.

failed to launch org.apache.spark.deploy.master.Master

Whenever I run the start-master.sh command on my local machine, I get the following error. Please, can someone help me fix this issue?
Terminal error
The error I get in the terminal:
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark-2.0.1-bin-hadoop2.6/logs/spark-andani-org.apache.spark.deploy.master.Master-1-andani.sakha.com.out
failed to launch org.apache.spark.deploy.master.Master:
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:748)
Log error
If I check the Spark log file, the following is the error:
Exception in thread "main" java.net.BindException: Cannot assign requested address: Service 'sparkMaster' failed after 16 retries! Consider explicitly setting the appropriate port for the service 'sparkMaster' (for example spark.ui.port for SparkUI) to an available port or increasing spark.port.maxRetries.
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:748)
The error occurs because your sparkMaster could not bind to your internal IP: the hostname likely resolves to an address that no longer belongs to this machine.
Check your /etc/hosts file to see whether the hostname maps to the machine's current IP address; your previous IP address might have changed.
Reconfigure it and run the command once again.
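A minimal diagnostic sketch (standard Linux tools; the SPARK_MASTER_HOST variable is an assumption for Spark 2.x, where it replaced the older SPARK_MASTER_IP):
hostname                        # the name the master tries to bind
grep "$(hostname)" /etc/hosts   # the address that name resolves to
ip addr show                    # is that address actually assigned to an interface?
If the mapping is stale, correct /etc/hosts, or pin the bind address in conf/spark-env.sh:
export SPARK_MASTER_HOST=127.0.0.1   # example value; use your machine's current IP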

MapReduce returns an error when accessing the datanode on the master machine

I set up a Hadoop 2.4.0 cluster with three machines. One master machine is deployed with the namenode, resource manager, datanode, and node manager. The other two worker machines are deployed with a datanode and node manager. When I run a Hive query, the job fails and the error is
2014-06-11 13:40:13,364 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.net.ConnectException: Call From master/127.0.0.1 to
master:43607 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1414)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:231)
at com.sun.proxy.$Proxy9.getTask(Unknown Source)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:136)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
at org.apache.hadoop.ipc.Client.call(Client.java:1381)
... 4 more
If I disable the datanode on the master machine, everything works well. I'm wondering whether it's allowed to deploy a datanode on the master machine. Thank you in advance for your kind help.
BTW, my /etc/hosts is the same on all three machines:
127.0.0.1 localhost
10.1.154.231 master
10.1.153.220 slave1
10.1.153.133 slave2
Please set up passwordless SSH on your master to itself.
You can achieve this by
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys2
Make sure the permissions are correct:
chmod 0600 ~/.ssh/authorized_keys2
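If no key pair exists yet, a minimal end-to-end sketch (assuming RSA keys in the default location; authorized_keys2 is kept from the answer above, though modern sshd usually reads authorized_keys):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # create a passphrase-less key pair
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys2
chmod 0600 ~/.ssh/authorized_keys2
ssh master                                  # should now log in without a password prompt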
In this case, you may check whether the namenode started correctly on the master by checking the log at yourhadoopfolder/logs/hadoop-[hadoop-user]-namenode-master.log
This is often caused by HDFS not having been formatted before. Run
hadoop namenode -format
Of course, you will then need to put your data into the cluster again.
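After formatting and restarting, a quick sanity check (jps ships with the JDK; dfsadmin is standard in Hadoop 2.4):
jps                      # NameNode and DataNode should both be listed on the master
hdfs dfsadmin -report    # all three datanodes should report in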

Giraph ZooKeeper port problems

I am trying to run the SimpleShortestPathsVertex (aka SimpleShortestPathComputation) example described in the Giraph Quick Start. I am running this on a Hortonworks Sandbox instance (HDP 2.1) using VirtualBox, and I packaged giraph.jar using profile hadoop_2.0.0.
When I try to run the example using
hadoop jar giraph.jar org.apache.giraph.GiraphRunner
org.apache.giraph.examples.SimpleShortestPathsVertex -vif
org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat -vip
/user/hue/tinygraph.txt -of org.apache.giraph.io.formats.IdWithValueTextOutputFormat
-op /user/hue/output/shortestpaths -w 1
I get the following exception
2014-04-30 07:22:15,390 INFO [main] org.apache.giraph.zk.ZooKeeperManager: onlineZooKeeperServers: Connect attempt 0 of 10 max trying to connect to sandbox.hortonworks.com:22181 with poll msecs = 3000
2014-04-30 07:22:15,396 WARN [main] org.apache.giraph.zk.ZooKeeperManager: onlineZooKeeperServers: Got ConnectException
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at org.apache.giraph.zk.ZooKeeperManager.onlineZooKeeperServers(ZooKeeperManager.java:701)
at org.apache.giraph.graph.GraphTaskManager.startZooKeeperManager(GraphTaskManager.java:357)
at org.apache.giraph.graph.GraphTaskManager.setup(GraphTaskManager.java:188)
at org.apache.giraph.graph.GraphMapper.setup(GraphMapper.java:60)
at org.apache.giraph.graph.GraphMapper.run(GraphMapper.java:90)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
I have found a workaround: it seems that Giraph expects ZooKeeper to be running on port 22181, while it is actually running on 2181. I have simply used the Ambari interface to set ZooKeeper to run on 22181 (go to http://127.0.0.1:8080/, log in with admin/admin, open the Services tab, select ZooKeeper, change the port to 22181, save, and do Service Actions -> Restart All).
Does anyone have a better solution for this problem? Is there a config via which the port should be specified, or is this port in the Giraph source code a typo?
Yes, you can specify it each time you run a Giraph job by using the option -Dgiraph.zkList=localhost:2181
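For example, the command from the question with the option added (same jar, formats, and paths as above, pointing at the sandbox's stock 2181 port; the -D option must come right after the GiraphRunner class, since it is parsed by Hadoop's ToolRunner):
hadoop jar giraph.jar org.apache.giraph.GiraphRunner
-Dgiraph.zkList=sandbox.hortonworks.com:2181
org.apache.giraph.examples.SimpleShortestPathsVertex -vif
org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat -vip
/user/hue/tinygraph.txt -of org.apache.giraph.io.formats.IdWithValueTextOutputFormat
-op /user/hue/output/shortestpaths -w 1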
Also, you can set it in the Hadoop configs, and then you don't have to pass this option each time you submit a Giraph job. For that, add the following property to the conf/core-site.xml file:
<property>
  <name>giraph.zkList</name>
  <value>localhost:2181</value>
</property>
[Please check the syntax; I don't recall it off the top of my head, and currently I don't have access to a cluster to check it.]

Unable to start node manager on Hadoop 0.23.0

While installing Hadoop 0.23.0 on the cluster, the node manager is unable to start, and I get the following error.
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: 0.0.0.0/0.0.0.0:8080
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:303)
at org.apache.hadoop.mapred.ShuffleHandler.start(ShuffleHandler.java:255)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.start(AuxServices.java:123)
at org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)
... 4 more
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
It means port 8080 is already in use, so run
sudo netstat -nvvpa | grep 8080
and check the output. If a Java process is listening on the port, stop that process if possible, and then try to start the node manager again.
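As a follow-up sketch: netstat's last column shows PID/program name, so assuming the offending PID were 12345 (a made-up example), you could stop it and restart the node manager (the daemon script path may differ by version):
sudo kill 12345
sbin/yarn-daemon.sh start nodemanager
Alternatively, if something else legitimately owns 8080, the shuffle handler's port can be moved via the mapreduce.shuffle.port property in your YARN configuration (verify the property name against your version's defaults).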
This solved my problem. Thank you.
