Launching cluster-openstack - cluster-computing

I am working on an OpenStack project on Ubuntu 20.04 Linux. I want to create a Hadoop cluster with one master node and three worker nodes, but the cluster fails to launch.
Status ERROR:
Creating cluster failed for the following reason(s): Failed to create trust Error ID: ef5e8b0a-8e6d-4878-bebb-f37f4fa50a88, Failed to create trust Error ID: 43157255-86af-4773-96c1-a07ca7ac66ed
Link: https://docs.openstack.org/devstack/latest/
Can you advise me about these errors? Is there anything to worry about?
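In DevStack-based setups, "Failed to create trust" means the cluster service could not create a Keystone trust on your behalf, so a quick sanity check is whether your user can create a trust at all. A minimal sketch with the openstack CLI, assuming your credentials are loaded (e.g. via an openrc file); the project name, role, and user IDs below are placeholders:
# Note your own user ID and the service (trustee) user ID
openstack user list
# Try creating a trust manually; if this fails, cluster creation will fail the same way
openstack trust create --project admin --role member <your_user_id> <trustee_user_id>
# The new trust should show up here; it can be removed afterwards with 'openstack trust delete'
openstack trust list
If the manual trust creation also fails, the Keystone log and the user's role assignments on the project are the next places to look.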

Related

Hadoop: ERROR BlockSender.sendChunks() exception

I have a Hadoop cluster (one master that acts as both namenode and datanode, and two slaves). I saw these error messages in the log files:
hadoop-hduser-datanode-master.log file:
2017-05-15 13:02:55,303 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
java.io.IOException: Tubería rota (Broken pipe)
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:570)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:739)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:527)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:239)
at java.lang.Thread.run(Thread.java:748)
This happened only on the master node, after a period of inactivity. Fifteen minutes earlier, I had run a wordcount example successfully.
The OS on each node is Ubuntu 16.04. The cluster was created using VirtualBox.
Could you help me, please?
I followed this link:
https://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
to configure some memory-related parameters, and my problem was resolved!
Note: in some posts I read that this error could be caused by a lack of disk space (not in my case), and in others the cause was the OS version (they recommended downgrading from Ubuntu 16.04 to 14.04).
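For anyone hitting the same trace, two quick checks are worth doing before retuning memory along the lines of the HDP guide above; the /etc/hadoop/conf path is the usual package location and may be $HADOOP_HOME/etc/hadoop on a manual install:
# Rule out the disk-space cause mentioned in the note
df -h
hdfs dfsadmin -report | grep -E 'DFS Remaining|DFS Used%'
# See which memory limits YARN is currently using before adjusting them
grep -A1 -E 'yarn.nodemanager.resource.memory-mb|yarn.scheduler.maximum-allocation-mb' /etc/hadoop/conf/yarn-site.xml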

Hue reports "Resource Manager not available" error but it is running fine

When I run the quick start, I get this error message:
Potential misconfiguration detected. Fix and restart Hue.
Resource Manager : Failed to contact an active Resource Manager: YARN RM returned a failed response: HTTPConnectionPool(host='localhost', port=8088): Max retries exceeded with url: /ws/v1/cluster/apps?user=hue (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))
Hive : Failed to access Hive warehouse: /user/hive/warehouse
HBase Browser : The application won't work without a running HBase Thrift Server v1.
Impala : No available Impalad to send queries to.
Oozie Editor/Dashboard : The app won't work without a running Oozie server
Pig Editor : The app won't work without a running Oozie server
Spark : The app won't work without a running Livy Spark Server
I don't know why Hue reports an error for the Resource Manager.
I haven't installed the other components yet.
My Resource Manager is running, and this API works fine: http://RMHOST:8088/ws/v1/cluster/apps?user=hue
The response is:
{
"apps": null
}
Is there any problem I missed?
I changed localhost to my IP address (e.g. 192.168.x.x) in resourcemanager_host, resourcemanager_api_url, and proxy_api_url.
I don't know why, but it works now.
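It most likely works because Hue was still pointed at localhost while the Resource Manager only answers on the machine's real address. A quick way to confirm this before and after editing hue.ini; the 192.168.1.10 address, the hue.ini path, and the restart command are assumptions for a typical packaged install:
# The RM REST API should answer on the real host but not on localhost in this situation
curl -s 'http://192.168.1.10:8088/ws/v1/cluster/info'
curl -s 'http://localhost:8088/ws/v1/cluster/info'
# Check which addresses Hue is actually configured with, then restart it so the change takes effect
grep -nE 'resourcemanager_host|resourcemanager_api_url|proxy_api_url' /etc/hue/conf/hue.ini
sudo service hue restart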

Failed to start HDFS service in Cloudera VM

When I try to start HDFS, I get this error:
Service did not start successfully; not all of the required roles
started: only 0/2 roles started. Reasons : Service has only 0 NameNode
roles running instead of minimum required 1.
How can I resolve this? Because of it, I am not able to work on this Cloudera VM.
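A first step that usually narrows this down is the NameNode role log, which records why it refused to start (an unformatted or corrupted name directory and a full disk are common culprits). A rough sketch; the log path below is the Cloudera Manager default and may differ on your VM:
# Find the most recent NameNode log and read its last lines for the actual failure
sudo ls -lt /var/log/hadoop-hdfs/ | head
sudo tail -n 100 /var/log/hadoop-hdfs/*NAMENODE*.log.out
# Low free disk space on the VM can also keep roles from starting, so check that too
df -h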

Percona Xtradb Cluster nodes won't start

I set up percona_xtradb_cluster-56 with three nodes in the cluster. To bootstrap the first node, I use the following command and it starts just fine:
#/etc/init.d/mysql bootstrap-pxc
The other two nodes, however, fail to start when I start them normally using the command:
#/etc/init.d/mysql start
The error I am getting is "The server quit without updating the PID file". The error log contains this message:
Error in my_thread_global_end(): 1 threads didn't exit 150605 22:10:29
mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended.
All the cluster nodes run Ubuntu 14.04. When I use percona-xtradb-cluster-5.5, the cluster and all the nodes run just fine, as expected. But I need to use version 5.6 because I am also using GTID, which is only available in 5.6 and not supported in earlier versions.
I followed these two Percona documents to set up the cluster:
https://www.percona.com/doc/percona-xtradb-cluster/5.6/installation.html#installation
https://www.percona.com/doc/percona-xtradb-cluster/5.6/howtos/ubuntu_howto.html
Any insight or suggestions on how to resolve this issue would be highly appreciated.
The problem is related to memory, as "The Georgia" writes. There should be at least 500 MB available for the default setup and bootstrapping. See here: http://sysadm.pp.ua/linux/px-cluster.html
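If the nodes are small VMs, it is easy to confirm whether memory is the bottleneck; the swap-file size below is only illustrative and a temporary workaround, not a tuning recommendation:
# Check how much memory is actually free on the node (the note above suggests roughly 500 MB is needed)
free -m
# On a small VM, a temporary swap file is one stopgap while testing the 5.6 bootstrap
sudo fallocate -l 1G /swapfile && sudo chmod 600 /swapfile
sudo mkswap /swapfile && sudo swapon /swapfile
/etc/init.d/mysql start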

Command 'ZkStartPreservingDatastore' failed for service 'zookeeper1'

I need your help. I have a question about starting a Hadoop cluster. The error:
Command 'ZkStartPreservingDatastore' failed for service 'zookeeper1'
Please help me, thank you.
Hope this helps. I encountered the same issue on our production CDH4 cluster. I tried to restart the service manually, and noticed the following error message in the logs:
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#193] - Too many connections from /10.0.2.133 - max is 50
So I edited /etc/zookeeper/conf/zoo.cfg and changed maxClientCnxns to 500. After that I managed to restart ZooKeeper, and the cluster came back.
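If you want to confirm the same connection limit is being hit before raising it, counting the established client connections per remote host is a quick check; the zookeeper-server service name is the CDH package default and may differ on your distribution:
# Count established connections to the ZooKeeper client port per remote host (the log above shows a limit of 50)
netstat -tan | awk '$4 ~ /:2181$/ && $6 == "ESTABLISHED" {split($5,a,":"); print a[1]}' | sort | uniq -c | sort -rn
# Verify the new limit is in place, then restart the service
grep maxClientCnxns /etc/zookeeper/conf/zoo.cfg
sudo service zookeeper-server restart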
