I'm trying, unsuccessfully, to start the Neo4j server. I've followed the guide at http://docs.neo4j.org/chunked/snapshot/server-installation.html#_mac_osx_service, but keep getting the following error:
Starting Neo4j Server...WARNING: not changing user
process [22112]... waiting for server to be ready......................................................................................................................... BAD.
Any ideas?
I was attempting to open multiple instances. A simple grep showed that the process was already running.
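A quick way to check, assuming a Unix-like shell (the process name can vary with how Neo4j was installed):
ps aux | grep -i neo4j
If an instance shows up, stop it (see the stop command below) before starting a new one.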
For reference, some commands:
./bin/neo4j start
./bin/neo4j restart
./bin/neo4j stop
./bin/neo4j info
I closed down my Node servers and that fixed the problem.
In VirtualBox I have a Debian guest that I sometimes want to run without X. So I edited /etc/grub.d/10_linux and added another menu entry with the kernel option "nox" appended. Then I added a line to /lib/systemd/system/lightdm.service, in the [Unit] section:
ConditionKernelCommandLine=!nox
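For what it's worth, the same condition can also live in a drop-in file, so the packaged unit stays untouched; a minimal sketch, assuming a systemd version that supports drop-ins:
mkdir -p /etc/systemd/system/lightdm.service.d
printf '[Unit]\nConditionKernelCommandLine=!nox\n' > /etc/systemd/system/lightdm.service.d/nox.conf
systemctl daemon-reload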
However, when starting this, it hangs with the message:
A start job is running for Hold until boot process finishes up (56min / no limit)
Thank you, systemd, for informing me about that. I wouldn't have noticed. Yet I would like to know which job it is that's hanging.
The system allows me to connect via SSH, but none of the systemctl or journalctl commands I tried told me the name of the service causing the problem. lightdm.service itself seems to be satisfied.
I know it's a bit late, but I just found out that one can use:
systemctl list-jobs
to find out what units are waiting or running at any given moment.
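The output is a small table with JOB, UNIT, TYPE and STATE columns; the job actually blocking boot typically shows STATE=running, while everything stuck behind it shows STATE=waiting. For example (the unit name here is only an illustration; plymouth-quit-wait.service happens to be the unit whose description is "Hold until boot process finishes up"):
systemctl list-jobs
# JOB UNIT                       TYPE  STATE
# 103 plymouth-quit-wait.service start running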
By adding systemd.debug-shell=1 to the kernel command line, a root shell becomes available on TTY9 (Ctrl+Alt+F9) to run the command above.
I first tried "systemd-analyze", and that gave me the message about "systemctl list-jobs".
Hope this helps someone with similar problems.
If I try to run the below command in order to start the Kafka server in CMD (Command Prompt):
C:\kafka_2.12-0.11.0.0\bin\windows\kafka-server-start.bat ..\..\config\server.properties
I get an error.
Question:
I just started to learn Kafka, so when I run the above command I cannot start the Kafka server. Where exactly am I going wrong? I still get the error despite deleting the log files. How can I start the Kafka server?
Any help will be appreciated.
Thanks.
See if the log.dirs configuration is wrong.
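The relevant line lives in config\server.properties; the value below is the shipped default, so adjust it to a directory the broker can actually write to:
log.dirs=/tmp/kafka-logs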
I had the same problem. I restarted my machine thinking the file in contention would be cleared after the restart. That didn't help. So I cleared out the entire tmp\kafka-logs directory and restarted the Kafka server, and it worked fine.
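For reference, the equivalent commands on Windows, assuming the default log.dirs shown above (which resolves to C:\tmp\kafka-logs on drive C:):
rmdir /s /q C:\tmp\kafka-logs
C:\kafka_2.12-0.11.0.0\bin\windows\kafka-server-start.bat ..\..\config\server.properties
Note that this wipes all broker data, which is usually only acceptable on a fresh learning setup.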
I need to change the server_name of a running RethinkDB instance. I stopped the server, updated the /etc/rethinkdb/instances.d/default.conf file, and then removed the metadata & rethinkdb_data from /var/lib/rethinkdb/default/data. Finally, when I executed rethinkdb --config-file /etc/rethinkdb/instances.d/default.conf, it showed that the server is ready but never came back to the bash shell prompt.
Can someone clarify this?
Thanks in advance.
That is expected when you run rethinkdb directly: it stays attached to your terminal. To start the rethinkdb daemon in the background instead, use:
/etc/init.d/rethinkdb start
It will automatically read the configuration file in /etc/rethinkdb/instances.d/default.conf.
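Alternatively, if you prefer launching it by hand with an explicit config file, rethinkdb can daemonize itself via its --daemon flag, which returns you to the shell:
rethinkdb --config-file /etc/rethinkdb/instances.d/default.conf --daemon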
Is anyone aware of a way to stop and start CDH (Cloudera Distribution of Hadoop) services with a script? We are doing this for production servers: for instance, if the servers are restarted, then before the reboot all the Hadoop services should stop gracefully, and on startup they should start again.
I have an 8-node Hadoop cluster on RHEL with Cloudera 5.4.7 installed on it.
So far I have identified a few ways to do that. One, from a link I found, says I have to use chkconfig to register the service with the OS, e.g. as below:
sudo chkconfig hadoop-hdfs-namenode on
But when I do that I get the error:
error reading information on service hadoop-hdfs-namenode: No such file or directory
which clearly states that it is unable to find the file I specified.
Then I searched for the file, and it is located at:
/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/etc/rc.d/init.d/hadoop-hdfs-namenode
/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/etc/default/hadoop-hdfs-namenode
Then I tried executing the same commands from the folder where the files are located, but got the same error. The permissions on the files are fine, and I tried ./ as well, but still the same error.
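My guess is that chkconfig only scans /etc/init.d, so the scripts inside the parcel directory are invisible to it. A sketch of a possible workaround, which I have not confirmed is supported for parcel deployments:
sudo ln -s /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/etc/rc.d/init.d/hadoop-hdfs-namenode /etc/init.d/hadoop-hdfs-namenode
sudo chkconfig hadoop-hdfs-namenode on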
I am also able to list all the processes which are currently running with:
sudo jps
14035 -- process information unavailable
10615 -- process information unavailable
15323 -- process information unavailable
5486 -- process information unavailable
2001 -- process information unavailable
46991 -- process information unavailable
42667 -- process information unavailable
33732 Jps
2698 -- process information unavailable
2727 -- process information unavailable
7901 -- process information unavailable
42624 -- process information unavailable
As one can see, the process names are not shown, but these are Hadoop processes. To stop them I could kill all of them, but that is not the way to gracefully stop Hadoop managed by Cloudera. Please let me know if anyone is aware of anything that can help me move forward.
Thanks to Cloudera, they provide a way to boot services on system startup. Below is the way to do that:
Click on the service
Go to the configuration
Search for "Automatically Restart Process"
Check the checkbox.
It will restart the services on bootup.
You can do this by executing a curl command from a shell script. For example, to start the Solr service you can use (substitute your cluster name for the <clusterName> placeholder):
curl -u admin:admin -X POST 'http://ipaddress:7180/api/v4/clusters/<clusterName>/services/solr1/commands/start' -H 'Content-Type: application/json; charset=utf-8'
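The matching stop call has the same shape (again, <clusterName> is a placeholder, and solr1 is the service name as it appears in Cloudera Manager):
curl -u admin:admin -X POST 'http://ipaddress:7180/api/v4/clusters/<clusterName>/services/solr1/commands/stop' -H 'Content-Type: application/json; charset=utf-8'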
For more details, visit:
http://cloudera.github.io/cm_api/apidocs/v10/index.html
I downloaded the 64-bit zipped version of MongoDB for Windows and created '/data/db' as instructed.
Now, when I run the "mongod" command, I get the following error and the MongoDB server shuts down automatically:
"ERROR : listen() failed error-10013. An attempt was made to access socket in a way forbidden by its access permissions. "
Please help me adjust the firewall settings in Windows to prevent this error and run MongoDB.
I was able to fix the error by using the following command:
mongod --bind_ip 127.0.0.1
Binding only to the loopback interface apparently sidesteps whatever the firewall was blocking. :)
This error also seems to happen when mongod is already running. On Windows 10, mongod will be listed under Background Processes in the Task Manager if it is running. If it is already running, ending the task should allow you to run mongod again without this error occurring. Also check that it is not running as a service; it may be set to restart automatically.
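From an (administrator) command prompt, something like this finds and ends a stray instance; this assumes the process is named mongod.exe and that the service, if installed, uses the default name MongoDB:
tasklist | findstr /i mongod
taskkill /F /IM mongod.exe
net stop MongoDB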
Also, if you have a Docker container running MongoDB, you can get this error. If you stop your container(s) running MongoDB, then mongod should start up.
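For example, assuming the container was started from the official mongo image:
docker ps --filter ancestor=mongo
docker stop <container-id>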
I was able to fix this issue by allowing access for the MongoDB server application in my antivirus software's firewall settings.
After you have done the above step, open cmd as administrator and go to the bin path of the MongoDB application on your system.
Then run the below command.
mongod
Note: try the above steps only after you have tried the steps below:
1) https://docs.mongodb.com/manual/tutorial/configure-windows-netsh-firewall/
2) https://www.tomshardware.com/news/how-to-open-firewall-ports-in-windows-10,36451.html
I ran across a similar error which is why I ended up on this thread. For me, my solution was that McAfee Antivirus was blocking MongoDB.
The initial error basically showed that access was denied for mongo:
[screenshot: error showing access denied for mongo]
I was able to do a search on the internet and found steps to allow MongoDB to run under McAfee Antivirus software by changing the setting for the app directly.
[screenshot: McAfee firewall settings]
When I located MongoDB in the apps requesting internet access, it was initially set to blocked. I selected the app, clicked on edit and changed it to 'Designated ports'.
[screenshot: MongoDB app setting changed to 'Designated ports']
Now I am able to run mongo whether the mongod service is started automatically or I start it manually in a Hyper terminal window.