Clickhouse server error - org.freedesktop.PolicyKit1 - clickhouse

I am getting this error when I try to restart my ClickHouse server.
Failed to start clickhouse-server.service: The name org.freedesktop.PolicyKit1 was not provided by any .service files
See system logs and 'systemctl status clickhouse-server.service' for details.
Upon further inspection of the server, we noticed that the log directory was full. After flushing the logs, the ClickHouse server restarted normally. But the error message does not cite the actual problem at all, so what is this error really pointing to? Please enlighten me.
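For future reference, a couple of standard commands that would surface a full log partition (the log path assumes a default ClickHouse package install):
df -h /var/log/clickhouse-server
du -sh /var/log/clickhouse-server/*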

org.freedesktop.PolicyKit1 is like sudo, but for systemd: it needs to be available so that systemctl can authorize service operations. I resolved it by switching to superuser privileges on the EC2 instance:
sudo su
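With a root shell open, the restart no longer needs to be authorized through polkit (service name taken from the error message above):
systemctl restart clickhouse-server.service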

Related

ERROR: Failed to determine the health of the cluster

I am running Elasticsearch and Kibana. I am not sure of the status of my Elasticsearch cluster (whether it is red, yellow, or green), but it seems I need a token generated by Elasticsearch, as in the screenshot. When I ran bin/elasticsearch-create-enrollment-token --scope kibana from the right directory, it errored out with ERROR: Failed to determine the health of the cluster.
According to Ioannis Kakavas on discuss.elastic, "CLI tools extending BaseRunAsSuperuserCommand should only connect to the local node". When I run on a local node, it works, but when I run inside the Elasticsearch container in a cluster, it doesn't. The solution was to execute the elasticsearch-reset-password and elasticsearch-create-enrollment-token scripts, respectively, like this (inside the Elasticsearch container):
/usr/share/elasticsearch/bin/elasticsearch-reset-password -i -u elastic --url https://localhost:9200
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url https://localhost:9200
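Once the enrollment token is printed, Kibana can be enrolled with it (a hedged example; the path assumes a default package install of Kibana):
/usr/share/kibana/bin/kibana-setup --enrollment-token <token>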
I encountered the same problem, and I just redid the process: unzipped the Elasticsearch and Kibana zip files again, and ran bin/elasticsearch in the newly created directory. Look for a message in a formatted box that contains both the password for the elastic user and the enrollment token for Kibana (the token is only valid for 30 minutes). This message will only appear once, the first time you run Elasticsearch.
I proceeded to run bin/kibana for Kibana and configured it in the browser, and everything worked out from there. Hope this helps!
I had the exact same issue:
$ sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
ERROR: Failed to determine the health of the cluster.
But after I restart the elasticsearch service:
$ sudo systemctl restart elasticsearch.service
then it works:
This tool will reset the password of the [elastic] user to an autogenerated value.
The password will be printed in the console.
Please confirm that you would like to continue [y/N]y
Password for the [elastic] user successfully reset.
New value: xxxxxx
Two possible solutions:
Make sure that you have enough disk space (a quick check is shown below).
Your VPN might be causing the issue.
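For the disk-space case, a quick check looks like this (the data path is the default for a Linux package install and is an assumption; Elasticsearch blocks writes once the flood-stage disk watermark is hit):
df -h /var/lib/elasticsearch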
The enrollment token will be printed in the terminal itself during installation; you just need to scroll up until you find it.
The reason for the error ERROR: Failed to determine the health of the cluster is that Elasticsearch has not been installed yet; running that command is like calling a function without defining it.
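To verify whether the node is actually up and reachable before running those CLI tools, a direct health check against the local node helps (the URL and the -k/-u flags assume a default secured install listening on port 9200):
curl -k -u elastic https://localhost:9200/_cluster/health?pretty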

cloudera host with bad health during install

I have tried again and again with all the required steps completed, but during cluster installation, when the selected parcels are installed, every host always shows bad health. The setup never completes fully.
I am installing CM 5.5 on CentOS 6.7 using VirtualBox.
The Error
Host is in bad health cm.feuni.edu
Host is in bad health dn1.feuni.edu
Host is in bad health dn2.feuni.edu
Host is in bad health nn1.feuni.edu
Host is in bad health nn2.feuni.edu
Host is in bad health rm.feuni.edu
The above errors are shown at step 6, where the setup says:
The selected parcels are being downloaded and installed on all the hosts in the cluster
In the previous step 5, all hosts had completed the heartbeat checks at the end.
Memory distribution:
cm: 8 GB
all others: 1 GB
I could not find a proper answer anywhere else. What could be the reason for the bad health?
I don't know if it will help you...
After struggling with this for a few days, I found the log files (at ), and they mentioned a mismatch of the GUID,
so I uninstalled everything from both machines (using the script they provide, /usr/share/cmf/uninstall-cloudera-manager.sh, plus yum remove 'cloudera-manager-*' and deleting every Cloudera-related directory I could find...)
and then removed the guid file:
rm /var/lib/cloudera-scm-agent/cm_guid
Afterwards I re-installed everything, and that fixed that issue for me...
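For reference, the cleanup sequence described above, collected in one place (run on every affected host; the paths are the ones given in this answer):
sudo /usr/share/cmf/uninstall-cloudera-manager.sh
sudo yum remove 'cloudera-manager-*'
sudo rm /var/lib/cloudera-scm-agent/cm_guid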
I read online that there can be issues with the hostname and things like that, but I guess that if you get to this part of the installation, you have already fixed all the domain/FQDN/hostname/hosts issues.
It saddens me there is no real manual/FAQ for this product.. :(
Good luck!
I faced the same problem. This is my solution:
First I edited config.ini
$ nano /etc/cloudera-scm-agent/config.ini
so that the hostname was the same as what the command $ hostname returned.
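For illustration, the lines that matter in config.ini look roughly like this (server_host points at the Cloudera Manager host; listening_hostname is the agent's own name and must match the output of hostname; the key names are taken from a CM 5 agent config and the values here are just examples from this question):
# /etc/cloudera-scm-agent/config.ini (excerpt, illustrative values)
server_host=cm.feuni.edu
listening_hostname=dn1.feuni.edu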
Then I restarted the Cloudera agent and server:
$ service cloudera-scm-agent restart
$ service cloudera-scm-server restart
Then in Cloudera Manager I deleted the cluster and added it again. The wizard continued to run normally.

how to manually start/stop hadoop services on boot up/down?

Hi, is anyone aware of how to stop and start CDH (Cloudera Distribution of Hadoop) services with a script? We are doing this for production servers. For instance, if the servers are restarted, then before the reboot all the Hadoop services should stop gracefully, and on startup they should start again.
I have an 8-node Hadoop cluster on RHEL with Cloudera 5.4.7 installed on it.
So far I have identified a few ways to do that. One is here on this link: it says I have to use chkconfig to register the service with the OS, for example as below:
sudo chkconfig hadoop-hdfs-namenode on
But when I do that, I get the error:
error reading information on service hadoop-hdfs-namenode: No such file or directory
which clearly states that it is unable to find the file I specified.
Then I searched for the file, and it is located at
/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/etc/rc.d/init.d/hadoop-hdfs-namenode
/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/etc/default/hadoop-hdfs-namenode
Then I tried executing the same commands from the folder where the files are located, but I got the same error. The permissions on the files are fine, and I tried ./ as well, but still the same error.
I am also able to list all the processes which are currently running with
sudo jps
14035 -- process information unavailable
10615 -- process information unavailable
15323 -- process information unavailable
5486 -- process information unavailable
2001 -- process information unavailable
46991 -- process information unavailable
42667 -- process information unavailable
33732 Jps
2698 -- process information unavailable
2727 -- process information unavailable
7901 -- process information unavailable
42624 -- process information unavailable
As you can see, the process names are not shown, but these are Hadoop processes. To stop them I could kill all of them, but that is not the way to gracefully stop Hadoop managed by Cloudera. Please let me know if anyone is aware of anything that can help me move forward.
Thankfully, Cloudera provides a way to start services on system startup. Here is how to do that:
Click on the service
Go to the configuration
Search for Automatically Restart Process
Check the Check-Box.
It will restart the services on bootup.
You can do this by executing a curl command from a shell script. For example, to start the Solr service you can use:
curl -u admin:admin -X POST http://ipaddress:7180/api/v4/clusters//services/solr1/commands/start -H 'Content-Type: application/json; charset=utf-8'
For more details, visit
http://cloudera.github.io/cm_api/apidocs/v10/index.html
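A rough sketch of such a script, following the curl pattern above (the CM host, credentials, API version, cluster name and service name are placeholders that must match your deployment; wire it into your init system so it is called with "stop" on shutdown and "start" on boot):
#!/bin/sh
# Hypothetical helper: start/stop a CDH service through the Cloudera Manager REST API.
CM_API="http://cmhost.example.com:7180/api/v10"   # CM server and API version (assumed)
AUTH="admin:admin"                                # CM credentials (assumed)
CLUSTER="cluster1"                                # cluster name as shown in CM (assumed)
SERVICE="hdfs"                                    # service name, e.g. hdfs, yarn, solr1
case "$1" in
  start|stop)
    # POST the start or stop command for the service and let CM orchestrate it
    curl -s -u "$AUTH" -X POST \
      -H 'Content-Type: application/json' \
      "$CM_API/clusters/$CLUSTER/services/$SERVICE/commands/$1"
    ;;
  *)
    echo "usage: $0 start|stop" >&2
    exit 1
    ;;
esac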

Impala The Cloudera Manager Agent got an unexpected response from this role's web server

I have done a Hadoop cluster installation with Cloudera Manager. After this installation, the Impala status has become bad.
I have the following error for the master node:
Web Server Status
and this one for nodes with the Impala daemon:
Impala Daemon Ready Check, Web Server Status
Looking into the logs, I found some errors:
The health test result for IMPALAD_WEB_METRIC_COLLECTION has become bad: The Cloudera Manager Agent got an unexpected response from this role's web server.
Looking into cloudera-scm-agent.log, there are these errors:
1261 Monitor-HostMonitor throttling_logger ERROR (29 skipped) Failed to collect NTP metrics
I tried to install NTP (sudo apt-get install ntp), but after this installation HDFS, Hive, YARN and other services go bad; with it removed, only Impala is bad.
MainThread agent ERROR Failed to connect to previous supervisor.
Another error is this:
Monitor-GenericMonitor throttling_logger ERROR Error fetching metrics at 'http://nodo-1:50075/jmx'
I tried checking all the hostnames and they seem correct...
So what is this problem? How can I solve it?
I also had a problem with NTP. The problem still existed after installing NTP, but when I ran sudo service ntp restart the error was fixed.
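For reference, the commands involved, plus a quick way to confirm the daemon is actually syncing (ntpq ships with the ntp package; a '*' in front of a peer marks the active sync source):
sudo apt-get install ntp
sudo service ntp restart
ntpq -p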

mongodb : listen() attempts to access socket in a forbidden way

I downloaded the 64-bit zipped version of MongoDB for Windows and created '/data/db' as instructed.
Now, when I run the "mongod" command, I get the following error and the MongoDB server shuts down automatically.
"ERROR : listen() failed error-10013. An attempt was made to access socket in a way forbidden by its access permissions. "
Please help me adjust the Windows firewall settings to prevent this error and run MongoDB.
I was able to fix the error by using the following command: mongod --bind_ip=127.0.0.1 :)
This error also seems to happen when mongod is already running. On Windows 10, mongod will be listed under Background Processes in the Task Manager if it is running. If it is already running, ending the task should allow you to run mongod again without this error occurring. Also check that it is not running as a service; it may be set to restart automatically.
Also, if you have a Docker container running MongoDB, you get this error too. If you stop your container(s) running MongoDB, then mongod should start up.
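A quick way to check whether something is already listening on the default port (standard Windows and Docker commands; port 27017 and the image name mongo are assumptions):
netstat -ano | findstr :27017
tasklist /FI "IMAGENAME eq mongod.exe"
docker ps --filter "ancestor=mongo"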
I was able to fix this issue by allowing access for the MongoDB server application under the firewall settings in my antivirus software.
After you have done the above step, open cmd as administrator and go to the bin path of the MongoDB installation on your system.
Then run the below command.
mongod
Note: try the above steps only after you have tried the steps below (an example firewall rule is sketched after the links).
1) https://docs.mongodb.com/manual/tutorial/configure-windows-netsh-firewall/
2) https://www.tomshardware.com/news/how-to-open-firewall-ports-in-windows-10,36451.html
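For reference, the inbound rule from the first link looks roughly like this (run in an elevated prompt; port 27017 is the default mongod port and is an assumption here):
netsh advfirewall firewall add rule name="Open mongod port 27017" dir=in action=allow protocol=TCP localport=27017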
I ran across a similar error which is why I ended up on this thread. For me, my solution was that McAfee Antivirus was blocking MongoDB.
The initial error basically showed that access was denied for mongo:
[screenshot: mongo error]
I was able to do a search on the internet and found steps to allow MongoDB to run under McAfee Antivirus software by changing the setting for the app directly.
[screenshot: McAfee settings]
When I located MongoDB in the apps requesting internet access, it was initially set to blocked. I selected the app, clicked on edit and changed it to 'Designated ports'.
[screenshot: MongoDB settings changed]
Now, I am able to run mongo whether the mongod service is started automatically or if I start it manually in a hyper terminal window.
